Indian Forum for Water Adroit

Show Posts

This section allows you to view all posts made by this member.

Messages - Pankaj Dey

Pages: 1 ... 4 5 [6] 7 8 9
Announcements / Introduction to Monte Carlo Markov Chain Methods.
« on: August 28, 2017, 06:44:38 PM »
A very useful result in probability theory as applied to the real world is the law of large numbers. It says that the sample mean of iid observations converges to the population mean in some sense. The central limit theorem (CLT) is a refinement of this. About half a century ago this method was extended to Markov chains, and a new tool known as MCMC was born. In this talk, we shall outline this method with an application.

 Speaker:    Prof Krishna B Athreya
             Department of Statistics
             Iowa State University, USA

Title:      Introduction to Monte Carlo Markov Chain Methods.
Venue:      EEB308 (old PE303)
Date:       Friday, Sep 1, 2017
Time:       4 pm - 5 pm.    (Tea/Coffee at 3:45 pm)
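The MCMC idea in the abstract can be sketched in a few lines: a random-walk Metropolis sampler whose ergodic average converges to the target mean, per the Markov-chain law of large numbers. This is an illustrative Python sketch, not material from the talk.

```python
import math
import random

def metropolis_sample(log_density, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: draws a Markov chain whose stationary
    distribution has (unnormalized) log-density `log_density`."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, pi(proposal) / pi(x))
        if math.log(rng.random() + 1e-300) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, whose mean is 0. By the Markov-chain law of
# large numbers, the ergodic average of the draws approaches that mean.
draws = metropolis_sample(lambda x: -0.5 * x * x, x0=3.0, n_steps=50_000)
estimate = sum(draws[5_000:]) / len(draws[5_000:])  # discard burn-in
```

The same recipe works for any target density known only up to a normalizing constant, which is what makes MCMC so widely applicable.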

Due to inherent bias, climate-model-simulated precipitation and temperature cannot be used to drive a hydrological model without pre-processing, i.e. statistical downscaling. This often consists of reducing the bias in the climate model simulations (bias correction) and/or transformation of the observed data in order to match the projected changes (delta change). The validation of statistical downscaling methods is typically limited to the scale for which the transformation was calibrated and to the driving variables (precipitation and temperature) of the hydrological model. The paper introduces an R package "musica" which provides ready-to-use tools for routine validation of statistical downscaling methods at multiple time scales, as well as several advanced methods for statistical downscaling. The musica package is used to validate simulated runoff. It is shown that using conventional methods for downscaling of precipitation and temperature often leads to substantial biases in simulated runoff at all time scales.
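The bias-correction step described above can be illustrated with a minimal empirical quantile-mapping sketch. This is a generic, language-agnostic illustration in Python on synthetic data, not the musica package's actual API.

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Empirical quantile mapping: translate each future model value to the
    observed value at the same quantile of the historical model climate.
    (Illustrative only -- not the musica package's API.)"""
    hist_sorted = np.sort(model_hist)
    ranks = np.searchsorted(hist_sorted, model_fut) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs, ranks)

# Synthetic example: the model is systematically too wet
rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 3.0, size=5000)          # "observed" precipitation
model_hist = rng.gamma(2.0, 4.0, size=5000)   # biased historical run
model_fut = rng.gamma(2.0, 4.4, size=5000)    # future run, ~10% wetter
corrected = quantile_map(model_hist, obs, model_fut)
```

After mapping, the corrected future series sits much closer to the observed climatology while preserving the relative change simulated by the model, which is exactly the behavior multi-scale validation then needs to check at aggregations other than the calibration scale.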


Data / Relevant Datasets and their sources
« on: August 16, 2017, 05:42:38 PM »
Please find the attached document for datasets and their online links.

To read the "State of the Climate 2016" by American Meteorological Society, follow the link:

Thank you,

Programming / RStoolbox: Tools for Remote Sensing Data Analysis
« on: August 12, 2017, 06:42:51 PM »
The main focus of RStoolbox is to provide a set of high-level remote sensing tools for various classification tasks. This includes unsupervised and supervised classification with different classifiers, fractional cover analysis and a spectral angle mapper. Furthermore, several spectral transformations like vegetation indices, principal component analysis or tasseled cap transformation are available as well.
Besides that, we provide a set of data import and pre-processing functions. These include reading and tidying Landsat meta-data, importing ENVI spectral libraries, histogram matching, automatic image co-registration, topographic illumination correction and so on.
Last but not least, RStoolbox ships with two functions dedicated to plotting remote sensing data (*raster* objects) with *ggplot2* including RGB color compositing with various contrast stretching options.
RStoolbox is built on top of the *raster* package. To improve performance, some functions use embedded C++ code via the *Rcpp* package. Moreover, most functions have built-in support for parallel processing, which is activated by running raster::beginCluster() beforehand.
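To make one of the spectral transformations mentioned above concrete, here is NDVI (one of the vegetation indices RStoolbox's spectralIndices() offers) computed as a plain NumPy sketch on a tiny synthetic scene. This is illustrative Python, not RStoolbox code.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red). One of the vegetation indices that
    RStoolbox's spectralIndices() computes; shown here as a NumPy sketch."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Tiny synthetic 2x2 scene: vegetated pixels reflect strongly in the NIR band
nir_band = np.array([[0.50, 0.60], [0.10, 0.40]])
red_band = np.array([[0.10, 0.10], [0.10, 0.20]])
veg_index = ndvi(nir_band, red_band)  # values near 1 indicate dense vegetation
```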


Study material / Lattice: Multivariate Data Visualization with R (Book)
« on: August 12, 2017, 06:04:34 PM »
Lattice brings the proven design of Trellis graphics (originally developed for S by William S. Cleveland and colleagues at Bell Labs) to R, considerably expanding its capabilities in the process. Lattice is a powerful and elegant high-level data visualization system that is sufficient for most everyday graphics needs, yet flexible enough to be easily extended to handle the demands of cutting-edge research. Written by the author of the lattice system, this book describes it in considerable depth, beginning with the essentials and systematically delving into specific low-level details as necessary. No prior experience with lattice is required to read the book, although basic familiarity with R is assumed.

The book contains close to 150 figures produced with lattice. Many of the examples emphasize principles of good graphical design; almost all use real data sets that are publicly available in various R packages. All code and figures in the book are also available online, along with supplementary material covering more advanced topics.

Deepayan Sarkar won the 2004 John M. Chambers Statistical Software Award for writing lattice while he was a graduate student in Statistics at the University of Wisconsin-Madison. He is currently doing postdoctoral research in the Computational Biology program at the Fred Hutchinson Cancer Research Center, is a member of the R Core Team, and is an active participant on the R mailing lists.


Programming / raster: Geographic Data Analysis and Modeling
« on: August 12, 2017, 05:51:46 PM »
Reading, writing, manipulating, analyzing and modeling of gridded spatial data. The package implements basic and high-level functions. Processing of very large files is supported.


Programming / rworldmap: A New R package for Mapping Global Data
« on: August 12, 2017, 05:48:30 PM »
rworldmap is a relatively new package available on CRAN for the mapping and visualisation of global data. The vision is to make the display of global data easier, to facilitate understanding and communication. The initial focus is on data referenced by country or grid due to the frequency of use of such data in global assessments. Tools to link data referenced by country (either name or code) to a map, and then to display the map are provided as are functions to map global gridded data. Country and gridded functions accept the same arguments to specify the nature of categories and colour and how legends are formatted. This package builds on the functionality of existing packages, particularly sp, maptools and fields. Example code is provided to produce maps, to link with the packages classInt, RColorBrewer and ncdf, and to plot examples of publicly available country and gridded data.


Data / MODIS GPP/NPP Project (MOD17)
« on: August 10, 2017, 06:02:57 PM »
The goal of the MOD17 MODIS project is to provide continuous estimates of Gross/Net Primary Production (GPP/NPP) across Earth’s entire vegetated land surface. MOD17 GPP/NPP outputs are useful for natural resource and land management, global carbon cycle analysis, ecosystem status assessment, and environmental change monitoring. MOD17 is part of the NASA Earth Observation System (EOS) program and is the first satellite-driven dataset to monitor vegetation productivity on a global scale.


Data / Global SPEI database
« on: August 10, 2017, 05:59:46 PM »
The Global SPEI database, SPEIbase, offers long-time, robust information about drought conditions at the global scale, with a 0.5 degrees spatial resolution and a monthly time resolution. It has a multi-scale character, providing SPEI time-scales between 1 and 48 months. Currently it covers the period between January 1901 and December 2015.


Programming / R Package: SPEI
« on: August 10, 2017, 05:58:31 PM »
A set of functions for computing potential evapotranspiration and several widely used drought indices including the Standardized Precipitation-Evapotranspiration Index (SPEI).
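For intuition, an SPEI-style computation can be sketched in a few lines: aggregate the climatic water balance (P - PET) at the chosen time scale, then standardize it. The sketch below uses an empirical CDF with Gringorten plotting positions instead of the log-logistic distribution the SPEI package actually fits, and all inputs are synthetic.

```python
import numpy as np
from statistics import NormalDist

def simple_spei(precip, pet, scale=3):
    """Simplified SPEI-style index: aggregate the climatic water balance
    (P - PET) over `scale` months, then standardize with an empirical CDF
    (Gringorten plotting positions) mapped through the standard-normal
    quantile function. The SPEI package fits a log-logistic distribution."""
    d = np.asarray(precip, dtype=float) - np.asarray(pet, dtype=float)
    agg = np.convolve(d, np.ones(scale), mode="valid")  # k-month moving sum
    ranks = np.argsort(np.argsort(agg)) + 1.0           # 1..n, ascending
    prob = (ranks - 0.44) / (len(agg) + 0.12)           # Gringorten positions
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in prob])

# 20 years of synthetic monthly data (mm)
rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 40.0, size=240)
pet = np.full(240, 70.0)
spei3 = simple_spei(precip, pet, scale=3)  # 3-month SPEI-like series
```

Negative values flag drier-than-usual conditions at that time scale; the multi-scale character of the SPEI comes from varying `scale` between 1 and 48 months.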


Announcements / ‘Panta Rhei’ and its relationship with uncertainty
« on: August 05, 2017, 05:26:42 PM »
A talk by Prof. Demetris Koutsoyiannis at 10th WORLD CONGRESS on Water Resources and Environment.

Slides are attached.

Dealing with complex and geologically realistic modeling of subsurface systems requires detailed spatial datasets. Such big data can usually be provided through an image. Despite various developments, training-image-based techniques are still not well designed for modeling multiscale and complex structures. Pixel-based methods honor the conditioning point data but reproduce large-scale features poorly, while pattern-based techniques reproduce long-range and complex structures well but are biased around the conditioning data. In this paper, a new look at geostatistical modeling using a hybrid pattern-pixel-based simulation (HYPPS) is proposed, wherein the pixel- and pattern-based techniques are used simultaneously. Perfect reproduction of the conditioning point data is achieved using the proposed HYPPS method. The algorithm is developed for single and multivariate simulations. The method is applied to different 2D/3D categorical data, and the results show significant improvement over previous techniques.


Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
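For readers unfamiliar with the LPIII fit underlying the study, a minimal stationary version can be sketched with the method of moments and the Wilson-Hilferty frequency factor (Bulletin 17B style). This is a deliberately simple stand-in for the paper's Bayesian estimation, run here on synthetic annual maxima.

```python
import numpy as np
from statistics import NormalDist

def lp3_quantile(peaks, return_period):
    """Stationary log-Pearson Type III quantile via the method of moments
    and the Wilson-Hilferty frequency factor (Bulletin 17B style). A simple
    stand-in for the Bayesian estimation used in the paper."""
    logs = np.log10(np.asarray(peaks, dtype=float))
    n = len(logs)
    m, s = logs.mean(), logs.std(ddof=1)
    # Bias-corrected sample skew of the log-transformed peaks
    g = (n / ((n - 1) * (n - 2))) * np.sum(((logs - m) / s) ** 3)
    z = NormalDist().inv_cdf(1.0 - 1.0 / return_period)
    # Wilson-Hilferty approximation to the Pearson III frequency factor
    k = (2.0 / g) * ((1.0 + g * z / 6.0 - g**2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

# Synthetic annual maximum discharges whose logs are right-skewed
rng = np.random.default_rng(7)
peaks = 10.0 ** rng.gamma(4.0, 0.15, size=80)
q2, q10, q100 = (lp3_quantile(peaks, t) for t in (2, 10, 100))
```

Split-sample testing as in the study would fit these parameters on the first half of a record and score the quantile predictions against the second half.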


This study quantifies mean annual and monthly fluxes of Earth’s water cycle over continents and ocean basins during the first decade of the millennium. To the extent possible, the flux estimates are based on satellite measurements first and data-integrating models second. A careful accounting of uncertainty in the estimates is included. It is applied within a routine that enforces multiple water and energy budget constraints simultaneously in a variational framework in order to produce objectively determined optimized flux estimates. In the majority of cases, the observed annual surface and atmospheric water budgets over the continents and oceans close with much less than 10% residual. Observed residuals and optimized uncertainty estimates are considerably larger for monthly surface and atmospheric water budget closure, often nearing or exceeding 20% in North America, Eurasia, Australia and neighboring islands, and the Arctic and South Atlantic Oceans. The residuals in South America and Africa tend to be smaller, possibly because cold land processes are negligible. Fluxes were poorly observed over the Arctic Ocean, certain seas, Antarctica, and the Australasian and Indonesian islands, leading to reliance on atmospheric analysis estimates. Many of the satellite systems that contributed data have been or will soon be lost or replaced. Models that integrate ground-based and remote observations will be critical for ameliorating gaps and discontinuities in the data records caused by these transitions. Continued development of such models is essential for maximizing the value of the observations. Next-generation observing systems are the best hope for significantly improving global water budget accounting.
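The closure check described above reduces to simple arithmetic on the annual fluxes: the surface water budget residual P - E - R - dS, expressed as a fraction of precipitation. A minimal sketch with hypothetical numbers:

```python
def budget_residual_pct(precip, evap, runoff, storage_change):
    """Surface water budget residual, P - E - R - dS, as a percent of P."""
    residual = precip - evap - runoff - storage_change
    return 100.0 * residual / precip

# Hypothetical annual continental fluxes in mm/yr (illustrative numbers only)
closure = budget_residual_pct(precip=800.0, evap=500.0,
                              runoff=290.0, storage_change=5.0)
# A residual well under the ~10% level counts as closing the budget
```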

