Indian Forum for Water Adroit


Topics - Pankaj Dey

Facilitates programmatic access to NASA Soil Moisture Active Passive (SMAP) data with R. It includes functions to search for, acquire, and extract SMAP data.

Link to manual:

Link to github page:

Hydrological sciences / 9 Free Global Land Cover / Land Use Data Sets
« on: June 15, 2018, 07:33:46 PM »
Please follow the link for more details:

Data Visualisation can be defined as representing numbers with shapes – and no matter what these shapes look like (areas, lines, dots), they need to have a color. Sometimes colors just make the shapes visible, sometimes they encode data or categories themselves. We’ll focus mostly on the latter in this article. But we’ll also take a general look at colors and what to consider when choosing them:

Link to the article:

Changing the vegetation cover of the Earth has impacts on the biophysical properties of the surface and ultimately on the local climate. Depending on the specific type of vegetation change and on the background climate, the resulting competing biophysical processes can have a net warming or cooling effect, which can further vary both spatially and seasonally. Due to uncertain climate impacts and the lack of robust observations, biophysical effects are not yet considered in land-based climate policies. Here we present a dataset based on satellite remote sensing observations that provides the potential changes (i) of the full surface energy balance, (ii) at the global scale, and (iii) for multiple vegetation transitions, as is now required for the comprehensive evaluation of land-based mitigation plans. We anticipate that this dataset will provide valuable information to benchmark Earth system models, to assess future scenarios of land cover change, and to develop the monitoring, reporting and verification guidelines required for the implementation of mitigation plans that account for biophysical land processes.

Link to paper:

Link to data:

The package xplain is designed to help users interpret the results of their statistical analyses.
It does so not in the abstract way textbooks do. A textbook cannot help users understand their specific findings directly: what does a result of 3.14 actually mean? That question is often hard to answer from a textbook alone, because the book provides its own examples but cannot refer to the specifics of the user's case. However, as we all know, we understand things best when they are explained to us with reference to the actual problem we are working on. xplain is made to fill this gap that textbooks (and other learning materials) leave.
The basic idea behind xplain is simple: package authors, or anyone else interested in explaining statistics, provide interpretation information for a statistical method (i.e. an R function) in the form of an XML file. Using a simple syntax, this interpretation information can reference the results of the user's call of the explained R function. At runtime, xplain then provides the user with textual interpretation that relates directly to his or her case.
Providing xplain interpretation information can be interesting for:
  • R package authors who implement a statistical method
  • statisticians who develop statistical methods themselves
  • college and university teachers who want to make their teaching content more accessible for their students
  • everybody who enjoys teaching and explaining statistics and thinks he/she has something to contribute

xplain offers support for interpretation information in different languages and on different levels of difficulty.

link to the web page:

High-resolution information on climatic conditions is essential to many applications in environmental and ecological sciences. Here we present the CHELSA (Climatologies at high resolution for the earth's land surface areas) data: temperature and precipitation estimates from the ERA-Interim climatic reanalysis, downscaled to a high resolution of 30 arc sec. The temperature algorithm is based on statistical downscaling of atmospheric temperatures. The precipitation algorithm incorporates orographic predictors including wind fields, valley exposition, and boundary layer height, with a subsequent bias correction. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979–2013. We compare the data derived from the CHELSA algorithm with other standard gridded products and station data from the Global Historical Climate Network. We compare the performance of the new climatologies in species distribution modelling and show that we can increase the accuracy of species range predictions. We further show that the CHELSA climatological data have accuracy similar to that of other products for temperature, but give better predictions of precipitation patterns.

Link to paper:

Link to data:
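The temperature downscaling idea can be illustrated with a minimal elevation/lapse-rate sketch. This is not the CHELSA algorithm itself (which statistically downscales ERA-Interim atmospheric temperatures and adds orographic predictors for precipitation); the lapse rate and elevations below are hypothetical:

```python
# Minimal lapse-rate temperature downscaling sketch (illustrative only).

def downscale_temperature(t_coarse_c: float,
                          elev_coarse_m: float,
                          elev_fine_m: float,
                          lapse_rate_c_per_km: float = -6.5) -> float:
    """Shift a coarse-cell temperature (deg C) to a fine-grid elevation
    using a fixed environmental lapse rate (deg C per km of elevation gain)."""
    return t_coarse_c + lapse_rate_c_per_km * (elev_fine_m - elev_coarse_m) / 1000.0
```

For example, a 15 deg C coarse cell at 500 m maps to 8.5 deg C on a 1500 m fine-grid pixel with the standard -6.5 deg C/km lapse rate.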

Observing surface water is essential for ecological and hydrological studies. This paper reviews the current status of detecting, extracting and monitoring surface water using optical remote sensing, especially progress in the last decade, and discusses the remaining challenges in this field. For example, it was found that pixel unmixing and reconstruction, and spatio-temporal fusion, are two common and low-cost approaches to enhance surface water monitoring. Remote sensing data have been integrated with in situ river flow to model the spatio-temporal dynamics of surface water. Recent studies have also shown that river discharge can be estimated using optical remote sensing imagery alone, which would be a breakthrough for hydrological studies in ungauged areas. Optical observations are, however, easily obscured by clouds and vegetation; this limitation can be reduced by integrating optical data with Synthetic Aperture Radar (SAR) and Digital Elevation Model (DEM) data. There is increasing demand for monitoring global water dynamics at high resolutions, which is now achievable with the development of big data and cloud computing techniques. Enhanced global or regional water monitoring in the future will require the integrated use of multiple sources of remote sensing data.
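One of the simplest optical water-detection approaches covered by this literature is a normalized difference water index contrasting green and near-infrared reflectance. A minimal sketch (the band reflectances are hypothetical, and real workflows add cloud masking and site-specific threshold calibration):

```python
# Normalized Difference Water Index (NDWI) water-mask sketch (illustrative only).
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDWI = (Green - NIR) / (Green + NIR); water tends toward positive values
    because water reflects green light but strongly absorbs near-infrared."""
    return (green - nir) / (green + nir)

def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Boolean water mask from a simple NDWI threshold."""
    return ndwi(green, nir) > threshold
```

A water pixel (high green, low NIR reflectance) yields a positive index, while vegetation (low green, high NIR) yields a negative one.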


Hydrologic soil groups (HSGs) are a fundamental component of the USDA curve-number (CN) method for estimation of rainfall runoff; yet these data are not readily available in a format or spatial resolution suitable for regional- and global-scale modeling applications. We developed a globally consistent, gridded dataset defining HSGs from soil texture, bedrock depth, and groundwater. The resulting data product, HYSOGs250m, represents runoff potential at 250 m spatial resolution. Our analysis indicates that the global distribution of soil is dominated by moderately high runoff potential, followed by moderately low, high, and low runoff potential. Low runoff potential, sandy soils are found primarily in parts of the Sahara and Arabian Deserts. High runoff potential soils occur predominantly within tropical and subtropical regions. No clear pattern could be discerned for moderately low runoff potential soils, as they occur in arid and humid environments and at both high and low elevations. Potential applications of these data include CN-based runoff modeling, flood risk assessment, and use as a covariate in biogeographical analyses of vegetation distributions.

Link to paper:

Link to dataset:

Description of dataset: This dataset, HYSOGs250m, represents a globally consistent, gridded dataset of hydrologic soil groups (HSGs) with a geographical resolution of 1/480 decimal degrees, corresponding to a projected resolution of approximately 250 m. These data were developed to support USDA curve-number runoff modeling at regional and continental scales. Classification of HSGs was derived from soil texture classes and depth to bedrock provided by the ISRIC SoilGrids250m system.
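HSG grids such as HYSOGs250m feed directly into the standard SCS curve-number runoff equation. A minimal sketch of that calculation (the CN value one would normally look up from the HSG and land cover is hypothetical here):

```python
# SCS curve-number direct runoff (USDA method underlying HYSOGs250m applications).
# Illustrative sketch, not code from the paper.

def scs_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) for storm rainfall P (mm) and curve number CN.

    S  = 25400/CN - 254   potential maximum retention (mm, SI form)
    Ia = 0.2 * S          conventional initial abstraction
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Higher curve numbers (higher runoff potential, e.g. HSG D soils) yield more runoff from the same storm; at CN = 100 all rainfall becomes runoff.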


Verifying that a statistically significant result is scientifically meaningful is not only good scientific practice, it is a natural way to control the Type I error rate. Here we introduce a novel extension of the p-value, the second-generation p-value (pδ), which formally accounts for scientific relevance and leverages this natural Type I error control. The approach relies on a pre-specified interval null hypothesis that represents the collection of effect sizes that are scientifically uninteresting or are practically null. The second-generation p-value is the proportion of data-supported hypotheses that are also null hypotheses. As such, second-generation p-values indicate when the data are compatible with null hypotheses (pδ = 1), or with alternative hypotheses (pδ = 0), or when the data are inconclusive (0 < pδ < 1). Moreover, second-generation p-values provide a proper scientific adjustment for multiple comparisons and reduce false discovery rates. This is an advance for environments rich in data, where traditional p-value adjustments are needlessly punitive. Second-generation p-values promote transparency, rigor and reproducibility of scientific results by a priori specifying which candidate hypotheses are practically meaningful and by providing a more reliable statistical summary of when the data are compatible with alternative or null hypotheses.

Link to paper:
The advantage over the classical concept of p-values is shown in a figure attached to this post.
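The definition can be sketched in a few lines: the second-generation p-value is the fraction of the interval estimate that overlaps the pre-specified interval null, with a correction that caps very wide (inconclusive) interval estimates at 1/2. The interval endpoints below are hypothetical; see the paper for the formal definition:

```python
# Second-generation p-value sketch, following the published definition:
# p_delta = |I ∩ H0| / |I| * max(|I| / (2|H0|), 1)
# where I is the interval estimate and H0 the interval null hypothesis.

def sgpv(est_lo: float, est_hi: float, null_lo: float, null_hi: float) -> float:
    """p_delta = 1: data compatible with the null; 0: with the alternative;
    in between: inconclusive."""
    overlap = max(0.0, min(est_hi, null_hi) - max(est_lo, null_lo))
    est_len = est_hi - est_lo
    null_len = null_hi - null_lo
    # correction factor: an interval estimate wider than twice the null
    # is capped at p_delta = 1/2, flagging it as inconclusive
    return (overlap / est_len) * max(est_len / (2.0 * null_len), 1.0)
```

For instance, a confidence interval lying entirely inside the null region gives pδ = 1, one entirely outside gives pδ = 0, and a very wide interval covering the whole null gives pδ = 1/2.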

Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a ‘compound event’. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.


Speaker: Dr. Dennis Helder
Time: 18 May 2018 (Friday) @ 3:00 pm
Venue: DESE Auditorium, IISc
Radiometric calibration of optical remote sensing satellite sensors is a necessary step so that the data acquired by these systems can be placed on an absolute scale of radiance or reflectance. This fundamental step converts the raw digital numbers recorded by the satellite into physical units and thus turns a ‘pretty picture’ into a scientific data set. There are two basic approaches to performing radiometric calibration. The first is to design calibration systems into the instrument itself. Often this is done by incorporating diffuser panels and/or lamps into the instrument. However, this approach adds significant additional cost and weight. Thus, many satellites do not incorporate such onboard systems. The second approach is most often termed ‘vicarious calibration’ and it involves using information acquired from a distance such as information from the earth imagery itself. The South Dakota State University Image Processing Laboratory has specialized in vicarious calibration of remote sensing imagery for over 25 years. In this presentation the fundamental concepts of vicarious calibration will be presented, examples of the surface reflectance method and the pseudo invariant calibration site (PICS) method will be provided, and application of these methods to the Landsat image archive, from 1972 to the present, will be given.
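The "raw digital numbers to physical units" step described above is, at its simplest, a linear gain/bias conversion followed by a solar normalization to reflectance. A minimal sketch (the gain, bias, and solar parameters are hypothetical, not those of any particular Landsat band):

```python
# Radiometric calibration sketch: DN -> at-sensor radiance -> TOA reflectance.
# Coefficients are hypothetical; real values come from sensor metadata.
import math

def dn_to_radiance(dn: float, gain: float, bias: float) -> float:
    """At-sensor spectral radiance L = gain * DN + bias (W m^-2 sr^-1 um^-1)."""
    return gain * dn + bias

def toa_reflectance(radiance: float, esun: float,
                    d_au: float, sun_elev_deg: float) -> float:
    """Top-of-atmosphere reflectance: rho = pi * L * d^2 / (ESUN * sin(elev)),
    with d the Earth-Sun distance (AU) and ESUN the band's solar irradiance."""
    return math.pi * radiance * d_au ** 2 / (esun * math.sin(math.radians(sun_elev_deg)))
```

Vicarious methods such as the PICS approach work backwards from this chain: a site of known, stable reflectance constrains what radiance (and hence what gain) the sensor should report.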
About Speaker:
Dr. Dennis Helder has been involved with the characterization and calibration of spaceborne and airborne remote sensing imaging systems for over 25 years. Initial work focused on characterization and removal of radiometric artifacts of the Landsat TM and MSS sensors. More recent work has emphasized development of vicarious radiometric calibration approaches for a variety of optical remote sensing systems as well as on-orbit point spread function estimation. Dr. Helder has served on several NASA and USGS EROS science teams including Landsat 7, Landsat 8, and EO-1. He is currently Associate Dean for Engineering Research and Distinguished Professor of Electrical Engineering at South Dakota State University and is also on detail to USGS EROS as director of the EROS CalVal Center of Excellence.

JASP is statistically inclusive, as it offers both frequentist and Bayesian analysis methods. Indeed, the primary motivation for JASP is to make it easier for statistical practitioners to conduct Bayesian analyses. We firmly believe that Bayesian statistics deserves to be applied more often and more widely than it is today, and that there is more to statistical inference than the frequentist p-value. A pragmatist may argue that, irrespective of one's statistical convictions, it is prudent to report the results from both paradigms: when the results point in the same direction, this bolsters one's confidence in the conclusions, but when they are in blatant contradiction, it weakens one's confidence.


Youtube video link:

We are recruiting a new postdoc for an exciting position to assess the role of dams in delivering improved food security in developing countries as part of the FutureDams research centre led by the University of Manchester. The selected candidate will use remote sensing, field datasets, and crop models to evaluate impacts of historic and future dam developments on agricultural productivity and livelihoods, and investigate trade-offs between agriculture and other water users (e.g. hydropower, environment). Research activities will focus on three primary case study river basins: the Nile and Volta basins in Africa, and the Salween River basin in Myanmar. The position is available for a period of 3 years starting from summer 2018 or as soon as possible thereafter. Further details and the project application form can be accessed by clicking the buttons below.


Demonstrating the “unit hydrograph” and flow routing processes involving active student participation – a university lecture experiment

The unit hydrograph (UH) has been one of the most widely employed hydrological modelling techniques to predict the rainfall–runoff behaviour of hydrological catchments, and is still used to this day. Its concept is based on the idea that a unit of effective precipitation per time unit (e.g. mm h⁻¹) will always lead to a specific catchment response in runoff. Given its relevance, the UH is an important topic that is addressed in most (engineering) hydrology courses at all academic levels. While the principles of the UH seem simple and easy to understand, past teaching experience suggests strong difficulties in students' grasp of the UH theory and its application. In order to facilitate a deeper understanding of the theory and application of the UH, we developed a simple and cheap lecture theatre experiment involving active student participation. The seating of the students in the lecture theatre represented the hydrological catchment in its size and form. A set of plastic balls, each fitted with a piece of magnetic strip so it could be tacked to any white/black board, represented unit amounts of effective precipitation. The balls were evenly distributed over the lecture theatre and routed by a set of given rules down the catchment to the catchment outlet, where the resulting hydrograph was monitored and illustrated on the black/white board. The experiment allowed an illustration of the underlying principles of the UH, including stationarity, linearity, and superposition of the generated runoff and subsequent routing. In addition, some variations of the experimental setup extended the UH concept to demonstrate the impact of elevation, different runoff regimes, and non-uniform precipitation events on the resulting hydrograph.
In summary, our own experience in the classroom, a first set of student exams, as well as student feedback and formal evaluation suggest that the integration of such an experiment deepened the learning experience through active participation. The experiment also initiated a more experience-based discussion of the theory and assumptions behind the UH. Finally, the experiment was a welcome break within a 3 h lecture setting, and great fun to prepare and run.
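The stationarity, linearity, and superposition principles demonstrated in the experiment amount to a discrete convolution of effective precipitation with the UH ordinates. A minimal sketch with hypothetical numbers:

```python
# Unit-hydrograph convolution sketch: the outlet hydrograph is the superposition
# of UH responses, each scaled by that time step's effective precipitation.
import numpy as np

# hypothetical 1 h unit hydrograph ordinates (response per mm of effective rain);
# they sum to 1.0, so one unit of effective rainfall yields one unit of runoff
uh = np.array([0.1, 0.3, 0.4, 0.15, 0.05])

# hypothetical effective precipitation (mm) in three successive hours
p_eff = np.array([2.0, 5.0, 1.0])

# linearity + superposition: convolve the rainfall series with the UH
q = np.convolve(p_eff, uh)

# mass balance: total runoff equals total effective rainfall (UH sums to 1)
assert abs(q.sum() - p_eff.sum()) < 1e-9
```

Each ball routed down the lecture theatre plays the role of one term in this sum; doubling the rainfall in an hour doubles that hour's contribution (linearity), and overlapping responses simply add (superposition).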

