Indian Forum for Water Adroit

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Alok Pandey

Pages: [1] 2
1
Hydrological sciences / Re: Data for Environmental Intelligence
« on: June 28, 2019, 10:07:02 AM »
Selecting a suitable water meter for your application involves several requirements: water quality, flow range, etc. The following information will help with the selection of the correct water meter for your site.

 Measuring irrigation water with a water meter is a more precise way of delivering water to a crop.
 Water meters monitor system performance and record the total water applied.
 Water meters provide verification of water received versus water pumped or purchased.
 They provide accurate water measurement whenever required by a private or regulatory agency.

Digital Water Meters (DWM) and domestic household water meters
http://www.vasthi.com/domestic-house-hold-water-meters/
have long been a point of discussion. However, the business case for investment differs between water companies, with many still conceptualising the role that domestic household water meters will play in service provision, customer service, and operations management.
The following users thanked this post: Alok Pandey

2
Hydrological sciences / Data for Environmental Intelligence
« on: June 03, 2019, 09:48:26 AM »
A deluge of Earth system data has become available in the past decades, including remote sensing, in situ observations from sensors, socioeconomic data and citizen science observations. One challenge is that in many countries such data are not routinely available. Another key challenge is to extract interpretable information and knowledge that allows for the quantification and monitoring of progress against the UN's Sustainable Development Goals, enabling evidence-based decision making for policymakers and industry.
A global ecosystem for Environmental Intelligence has the potential to put us on a path toward a sustainable future. This will require action from citizens, governments, the private sector and intergovernmental organisations to collect and share data, process data, and create analytical insights and knowledge. It requires more than the mere collection of data; it requires data fusion, modelling, analysis over time and space, understanding of the interdependencies, correlations and tipping points, as well as making timely predictions and interventions.
Below is a non-exhaustive megalist of relevant datasets and databases that cover multiple ecosystems and spatiotemporal scales.


Link: https://github.com/rockita/Environmental_Intelligence
The following users thanked this post: Alok Pandey

3
Hydrological sciences / Using R in Hydrology - EGU2018 Short Course
« on: April 22, 2018, 02:37:21 PM »
This was a short course conducted during EGU this year. The course was divided into six workflows as follows:


  • Introduction to the short course - Louise Slater
  • Accessing hydrological data using web APIs (a demo of the rnrfa package) - Claudia Vitolo
  • Processing, modelling and visualising hydrological data in R (tidyverse; piping, mapping and nesting) - Alexander Hurley
  • Extracting netCDF climate data for hydrological analyses (reading and visualising gridded data) - Louise Slater
  • Hydrological modelling and teaching modelling (airGR and airGRteaching) - Guillaume Thirel
  • Typical hydrological tasks in R (List columns, Leaflet and coordinate transformation, Open Street Maps) - Tobias Gauster

Please follow the github link to access the necessary materials: https://github.com/hydrosoc/rhydro_EGU18/



The following users thanked this post: Alok Pandey

4
Programming / Hydrological Models in an R Package (airGR)
« on: November 20, 2017, 05:27:06 PM »
airGR is a package that brings into the R software the hydrological modelling tools used and developed at the Catchment Hydrology Research Group of Irstea (France), including the GR rainfall-runoff models and a snow accumulation and melt model, CemaNeige. The airGR package has been designed to fulfil two major requirements: to facilitate use by non-expert users and to allow flexibility regarding the addition of external criteria, models or calibration algorithms.
https://odelaigue.github.io/airGR/
https://cran.r-project.org/web/packages/airGR/airGR.pdf
The following users thanked this post: Alok Pandey

5
Announcements / VATI (Value Added information in real TIme)
« on: November 20, 2017, 03:23:35 PM »
Dr Sat Kumar Tomer and his research group (Aapah Innovations) have introduced an interesting application:
VATI (visualization of data in near real time).


http://www.aapahinnovations.com/vati/
http://www.aapahinnovations.com/


VATI (Value Added information in real TIme) is an application to visualize value-added information in real time. The value-added products/information is estimated by processing the data from multiple satellites.
Free edition includes:
(i) Visualization of the data in near real-time.
(ii) Choose any one geographical area.
(iii) Downloading of maps in png format.
You can access VATI via the links above. If you do not have credentials for VATI, you can request them by sending an email to vati@aapahinnovations.com.


The following users thanked this post: Alok Pandey

6
Study material / MathWorks seminar-Materials
« on: November 17, 2017, 12:05:08 PM »
Dear all

FYI

We had a MathWorks seminar on Advanced Programming Techniques in MATLAB at NITK recently. I am attaching the presentation and demo codes here.
The following users thanked this post: Alok Pandey

7
Data / Download FREE LULC Data for INDIA
« on: November 06, 2017, 12:20:34 PM »
Hello Everyone,

This is a website from which you can download free Land Use Land Cover data for the Indian region. It is based on a recent paper by P. S. Roy et al.

https://daac.ornl.gov/VEGETATION/guides/Decadal_LULC_India.html

Best,
Neha
The following users thanked this post: Alok Pandey

8
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. The results show that calculating daily temperature as the average of the minimum and maximum daily readings leads to an overestimation of the daily values of roughly 10% or more when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though using the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).


Reference: http://www.sciencedirect.com/science/article/pii/S0169809517300765
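The bias described above can be illustrated with a quick synthetic sketch (the numbers below are invented for illustration and are not from the paper): on a day whose temperature curve is not symmetric, the conventional (Tmin + Tmax)/2 value drifts away from the true hourly mean.

```python
import numpy as np

# Synthetic hourly temperatures for one day: a sinusoidal diurnal cycle
# (coolest at 03:00, warmest at 15:00) plus a brief afternoon spike, so the
# curve is not symmetric about its midpoint.
hours = np.arange(24)
temps = 20 + 8 * np.sin((hours - 9) * np.pi / 12)
temps[15] += 3.0  # short heat spike at 15:00

true_mean = temps.mean()                       # average of all 24 hourly readings
minmax_mean = (temps.min() + temps.max()) / 2  # conventional (Tmin + Tmax) / 2

print(f"hourly mean : {true_mean:.3f}")   # 20.125
print(f"min/max mean: {minmax_mean:.3f}") # 21.500
print(f"bias        : {minmax_mean - true_mean:.3f}")
```

The spike raises the maximum (and hence the min/max midpoint) far more than it raises the true 24-hour average, which is exactly the kind of overestimation of extremes the study quantifies.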
The following users thanked this post: Alok Pandey

9
Due to inherent biases, climate-model-simulated precipitation and temperature cannot be used to drive a hydrological model without pre-processing, i.e. statistical downscaling. This often consists of reducing the bias in the climate model simulations (bias correction) and/or transforming the observed data to match the projected changes (delta change). Validation of statistical downscaling methods is typically limited to the scale for which the transformation was calibrated and to the driving variables (precipitation and temperature) of the hydrological model. The paper introduces an R package, "musica", which provides ready-to-use tools for routine validation of statistical downscaling methods at multiple time scales, as well as several advanced methods for statistical downscaling. The musica package is used to validate simulated runoff. It is shown that using conventional methods for downscaling precipitation and temperature often leads to substantial biases in simulated runoff at all time scales.


LINK: https://cran.r-project.org/web/packages/musica/musica.pdf
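For readers unfamiliar with the two transformations mentioned above, here is a minimal sketch of mean-based delta change and multiplicative bias correction. The numbers are toy values, and this is plain Python, not the musica API (which is in R and offers far more sophisticated, multi-scale variants):

```python
import numpy as np

# Toy daily precipitation series (mm); all values invented for illustration.
obs = np.array([0.0, 5.0, 2.0, 0.0, 10.0, 3.0])      # observations
gcm_hist = np.array([1.0, 4.0, 3.0, 0.0, 8.0, 2.0])  # climate model, historical run
gcm_fut = np.array([1.5, 6.0, 3.0, 0.0, 12.0, 3.0])  # climate model, future run

# Delta change: perturb the *observations* by the model's projected relative change.
change_factor = gcm_fut.mean() / gcm_hist.mean()
obs_future = obs * change_factor

# Multiplicative bias correction: rescale the *model output* toward the observed mean.
bias_factor = obs.mean() / gcm_hist.mean()
gcm_fut_corrected = gcm_fut * bias_factor

print(f"change factor: {change_factor:.3f}")  # > 1 means a wetter projected future
print(f"bias factor  : {bias_factor:.3f}")
```

Both transformations here match only the mean at a single time scale; the paper's point is that such simple calibrations can still leave large biases in simulated runoff at other scales.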
The following users thanked this post: Alok Pandey

10
Machine learning comprises algorithms that can learn from data without relying on rules-based programming. Data-driven (statistical) models are formalizations of relationships between variables in the form of mathematical equations.

Nowadays, many, if not most, time series in hydrology and water resources systems are nonstationary. Therefore, it is necessary to use methods that can model the nonstationary behaviour of environmental variables in order to optimize water systems. This implies that classical statistical methods, which assume that the time series are stationary, are not suitable.

The attachment below is an excellent article on why we need a machine learning approach in water resource systems.
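As a minimal illustration of the stationarity point (with synthetic numbers, not data from the attached article): a trend makes the "constant mean" assumption of classical statistics fail, which even a simple split-sample check exposes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Stationary series: constant mean plus random fluctuations.
stationary = 100 + rng.normal(0, 5, size=n)

# Nonstationary series: the same noise plus a linear trend,
# as often seen in hydro-climatic records.
nonstationary = stationary + 0.2 * np.arange(n)

def half_means(x):
    """Mean of the first and second halves of a series."""
    mid = len(x) // 2
    return x[:mid].mean(), x[mid:].mean()

print("stationary   :", half_means(stationary))     # halves agree closely
print("nonstationary:", half_means(nonstationary))  # trend shifts the second-half mean
```

A model calibrated under the stationary assumption would treat the second half of the trending series as systematically surprising, which is why nonstationarity-aware (including machine learning) methods are argued for.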
The following users thanked this post: Alok Pandey

11
Post your question/information / Re: Cauvery (Kaveri) River Water Dispute
« on: October 07, 2016, 01:54:07 PM »
Hi Alok,

In the past it was a hydrological problem, but now it has turned into a social, cultural, and even legal problem. Sometimes I think that, instead of dividing the states by language or other criteria, we should have divided them on a hydrological basis.

Thanks for sharing.
The following users thanked this post: Alok Pandey

13
"Groundwater Assessment and Modelling"

Paperback: 332 pages
Publisher: CreateSpace Independent Publishing Platform (March 31, 2015)
Language: English
ISBN-10: 1511520493
ISBN-13: 978-1511520492
Product Dimensions: 7 x 0.8 x 10 inches
Shipping Weight: 1.6 pounds
Author: C. P. Kumar, Scientist, National Institute of Hydrology, Roorkee - 247667 (Uttarakhand), India

(Review published in Journal of Geological Society of India, Volume 88, July 2016)

A few decades back, groundwater used to be treated as an issue of civil engineering rather than science. Groundwater development was confined mainly to drinking water wells, such as dug wells and hand-pump-fitted tube wells. Of course, the use of step wells and similar structures to meet water needs has been an age-old tradition in arid areas. There was no worry about the resource's availability, as the draft was minimal and the resource is annually replenishable. Limitations of the resource and its exploitation started coming to the fore following the acceleration of its development since the seventies, with adverse fallouts of uncontrolled development such as dipping water levels, drying wells, falling well yields and polluted well waters. Further, with the passage of time, increased knowledge of subsurface geology and hydrology led to the realization that groundwater development calls for scientific planning and management. Groundwater potential depends upon aquifer distribution and recharge, as well as groundwater flow characteristics, and flow equations involve complex mathematical computations. In this context the key issues are resource assessment and estimation, prediction of the impact of development, and alternative development options or scenarios. Hence, in the last three decades or so, mathematical modelling has come up as an indispensable tool in groundwater management in the country; it clearly describes the groundwater system and its flow dynamics. C. P. Kumar, a scientist of repute at the National Institute of Hydrology with wide experience in the field, has brought out this highly valuable and educative lecture series as a treatise on 'Groundwater Assessment and Modelling' for the benefit of young hydrogeologists and water managers. Kumar enlightens students of groundwater science in this emerging knowledge field for the disciplined development of this valuable national resource, which is otherwise susceptible to manifold problems.

Keeping in mind the targeted readership, the first six chapters of the book are devoted to the basics of groundwater hydrology: the water resources potential of India, water balance analysis, management of aquifer recharge, assessment of groundwater potential, and groundwater data requirements and analysis. These chapters serve as a valuable introduction for students, leading to the main theme, "modelling". He has covered almost every aspect of groundwater assessment, from water table contouring to GIS and Kriging, introducing modern techniques of resource investigation, analysis and assessment.

The objective being clear, the author sails smoothly through the subsequent chapters 7-17, focusing on all relevant aspects of groundwater modelling. These chapters are sequenced in a stepwise approach that helps students of modelling enormously. The ultimate focus is, of course, on groundwater modelling, and the book addresses all its nuances: data requirements, basic guidelines, model limitations, selection of groundwater modelling software, pitfalls and sensitivities, and common errors in modelling exercises. In the later chapters the author gives an overview of groundwater models, details the process of constructing a modular three-dimensional groundwater model, discusses the pitfalls and sensitivities of groundwater modelling, and covers the modelling of unsaturated flow, so important for understanding infiltration and recharge processes. Kumar also presents several examples of groundwater models, such as seawater intrusion and soil moisture movement; both being important from a groundwater management point of view, they are dwelt upon in fair detail. I have not seen any other book on modelling delve so deep into the subject while remaining so concise, simple and systematic in approach.

The author is also alive to the day's burning issue, adding a valuable chapter on climate change and global warming that narrates the modalities and processes of climate modelling, with a focus on its impact on groundwater, nicely depicted in the flow chart (p. 291). He has covered all aspects of modelling with due care. Even important contributions of other scientists have not gone unreported, and each chapter ends with a priceless reference list for further study.

Ground-truthing is an important step and a must in modelling exercises. All readers should know the data types needed to obtain the desired results and have a clear understanding of the ultimate results or products to be expected from modelling. One has to calibrate the model at every stage against field data.

Keeping an eye on some major groundwater management problems in the country, a few live models (NIH- and CGWB-sponsored) might be added to the book, such as conjunctive use in canal commands (Indira Gandhi Nahar Project), groundwater flow modelling in typical fractured rocks (basement crystallines, Deccan traps), and pumping cycles and optimal rates in a well field vis-a-vis upconing of saline water (Palla well field). A subject index to help readers would also be a welcome addition. These are mere suggestions which may enrich the volume.

I have come across very few student-friendly books so systematically sequenced in steps and approach. This book is meant to furnish a sound foundation for modelling exercises in varied hydrogeological problems and their solutions. It will be welcomed by all concerned with the country's groundwater resources management. I am sure Kumar's book will spur more and more application of modelling in groundwater studies in India. Such a book has been awaited for long. C. P. Kumar, the author of this treatise, deserves encomiums.

SUBHAJYOTI DAS
Bengaluru
E-mail: subhajyoti_das@hotmail.com

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The above book (Paperback and Kindle formats) can be purchased online at the following websites.

(1) CreateSpace:

https://www.createspace.com/5406100  (Paperback = US$ 48.32)

(2) Amazon:

http://www.amazon.com/Groundwater-Assessment-Modelling-Mr-Kumar/dp/1511520493 (Paperback = US$ 48.32)

http://www.amazon.com/dp/B00VGMZM7W (Kindle Edition = US$ 15.12)

(3) Amazon India:

http://www.amazon.in/Groundwater-Assessment-Modelling-C-Kumar/dp/1511520493 (Paperback = Indian Rs. 3059)

http://www.amazon.in/gp/product/B00VGMZM7W (Kindle Edition = Indian Rs. 1006)

(4) LAMBERT Academic Publishing

https://www.morebooks.de/store/gb/book/groundwater-assessment-and-modelling/isbn/978-3-659-69157-7 (Paperback = 79.90 €)
The following users thanked this post: Alok Pandey

14
Programming / Land Use Change Modelling in R : lulcc Package
« on: July 12, 2016, 10:11:50 PM »
The lulcc package is an open and extensible framework for land use change modelling in R.
The following link is to the manual of the package:
https://cran.r-project.org/web/packages/lulcc/lulcc.pdf

The following users thanked this post: Alok Pandey

15
Post your question/information / Re: KS test to fit pdf
« on: June 22, 2016, 11:53:51 AM »
The test statistic of the KS test is the maximum absolute distance between two CDFs.

Suppose you want to check whether a particular dataset (say X) follows a certain distribution (say the exponential distribution, https://en.wikipedia.org/wiki/Exponential_distribution). Intuition suggests that the CDF obtained empirically from the data (https://en.wikipedia.org/wiki/Empirical_distribution_function) should align closely with the CDF obtained by fitting the exponential distribution to X. For the second part, you need the exponential distribution's CDF equation, which will be of the form F(x|lambda), where lambda is the parameter of the distribution. You need a representative of X to serve as that parameter: you estimate lambda from the data, substitute it into the CDF equation, and evaluate it at the x values to get the corresponding CDF values.

Since you are comparing 13 distributions, you first need the empirical CDF and the 13 fitted CDFs obtained by the above procedure. Then compare each of the 13 CDFs with the empirical CDF using the KS test.

In MATLAB, kstest is the command for this purpose (http://in.mathworks.com/help/stats/kstest.html). The help file includes an example where you can directly provide the two CDFs in the syntax and get the test result.

I am providing the example from help file below:

Specify the Hypothesized Distribution Using a Two-Column Matrix
Load the sample data. Create a vector containing the first column of the students' exam grades data.
Code:
load examgrades;
x = grades(:,1);
Specify the hypothesized distribution as a two-column matrix. Column 1 contains the data vector x. Column 2 contains cdf values evaluated at each value in x for a hypothesized Student's t distribution with a location parameter of 75, a scale parameter of 10, and one degree of freedom.
Code:
test_cdf = [x,cdf('tlocationscale',x,75,10,1)];
Test if the data are from the hypothesized distribution.
Code:
h = kstest(x,'CDF',test_cdf)
h =
     1
The returned value of h = 1 indicates that kstest rejects the null hypothesis at the default 5% significance level.
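The same workflow can also be sketched in Python with SciPy (an illustrative sketch on simulated data, not a translation of the MATLAB example above): estimate the parameter from the data, build the fitted CDF, and let kstest measure the maximum distance to the empirical CDF.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Sample data X; here drawn from an exponential distribution for illustration.
x = rng.exponential(scale=2.0, size=500)

# Estimate the distribution parameter from the data itself: for the
# exponential distribution, lambda = 1 / mean, i.e. the scale is the sample mean.
scale_hat = x.mean()

# KS test: maximum absolute distance between the empirical CDF of x and the
# fitted exponential CDF with loc = 0 and scale = scale_hat.
stat, pvalue = stats.kstest(x, "expon", args=(0, scale_hat))
print(f"D = {stat:.4f}, p = {pvalue:.4f}")

# Repeating this for each of the 13 candidate distributions and comparing the
# resulting D statistics identifies the best-fitting distribution.
```

One caveat: when the parameter is estimated from the same data being tested, the standard KS p-value is only approximate (the Lilliefors variant of the test corrects for this).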
The following users thanked this post: Alok Pandey
