Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Pankaj Dey

Pages: [1] 2 3 ... 29
Hydrological sciences / Where is the bottom of a watershed ?
« on: February 20, 2020, 06:57:55 PM »
Watersheds have served as one of our most basic units of organization in hydrology for over 300 years (Dooge, 1988; McDonnell, 2017; Perrault, 1674). With growing interest in groundwater‐surface water interactions and subsurface flow paths, hydrologists are increasingly looking deeper. But the dialog between surface water hydrologists and groundwater hydrologists is still embryonic and many basic questions are yet to be posed, let alone answered. One key question is: where is the bottom of a watershed? Where to draw the bottom boundary has not yet been fully addressed in the literature, and how to define the watershed “bottom” is a fraught question. There is large variability across physical and conceptual models regarding how to implement a watershed bottom, and what counts as ‘deep’ varies markedly in different communities. In this commentary, we seek to initiate a dialog on existing approaches to defining the bottom of the watershed. We briefly review the current literature describing how different communities typically frame the answer to just how deep we should look, and identify situations where ‘deep’ flow paths are key to developing realistic conceptual models of watershed systems. We then review the common conceptual approaches used to delineate the watershed lower boundary. Finally, we highlight opportunities to trigger this potential research area at the interface of catchment hydrology and hydrogeology.

Spatially and temporally explicit canopy water content (CWC) data are important for monitoring vegetation status, and constitute essential information for studying ecosystem-climate interactions. Despite many efforts, there is currently no operational CWC product available to users. In the context of the Satellite Application Facility for Land Surface Analysis (LSA-SAF), we have developed an algorithm to produce a global dataset of CWC based on data from the Advanced Very High Resolution Radiometer (AVHRR) sensor on board Meteorological–Operational (MetOp) satellites forming the EUMETSAT Polar System (EPS). CWC reflects the water conditions at the leaf level and information related to canopy structure. An accuracy assessment of the EPS/AVHRR CWC indicated a close agreement with multi-temporal ground data from SMAPVEX16 in Canada and Dahra in Senegal, with RMSEs of 0.19 kg m−2 and 0.078 kg m−2, respectively. In particular, when the Normalized Difference Infrared Index (NDII) was included, the algorithm was better constrained in semi-arid regions and saturation effects were mitigated in dense canopies. An analysis of spatial scale effects shows that the mean bias error in CWC retrievals remains below 0.001 kg m−2 when spatial resolutions ranging from 20 m to 1 km are considered. The present study further evaluates the consistency of the LSA-SAF product with respect to the Simplified Level 2 Product Prototype Processor (SL2P) product, and demonstrates its applicability at different spatio-temporal resolutions using optical data from MSI/Sentinel-2 and MODIS/Terra & Aqua. Results suggest that the LSA-SAF EPS/AVHRR algorithm is robust, agrees with the CWC dynamics observed in available ground data, and is also applicable to data from other sensors. We conclude that the EPS/AVHRR CWC product is a promising tool for monitoring vegetation water status at regional and global scales.
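The abstract does not give the exact band arithmetic of the LSA-SAF algorithm, but the NDII it mentions has a standard definition: a normalized difference of near-infrared and shortwave-infrared reflectance. A minimal sketch with illustrative reflectance values (not taken from the study):

```python
import numpy as np

def ndii(nir, swir):
    """NDII = (NIR - SWIR) / (NIR + SWIR), computed per pixel.
    Wetter canopies absorb more SWIR, so NDII rises with water content."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

# Illustrative reflectances for a moist canopy (NIR ~ 0.40, SWIR ~ 0.20):
print(ndii(0.40, 0.20))  # ≈ 0.33
```

Because the index saturates more slowly than reflectance itself, it is plausible as the extra constraint the authors describe for dense canopies.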

Using a framework of partial differential equation-constrained optimization, we demonstrate that multiple constitutive relations can be extracted simultaneously from a small set of images of pattern formation. Examples include state-dependent properties in phase-field models, such as the diffusivity, kinetic prefactor, free energy, and direct correlation function, given only the general form of the Cahn-Hilliard equation, Allen-Cahn equation, or dynamical density functional theory (phase-field crystal model). Constraints can be added based on physical arguments to accelerate convergence and avoid spurious results. Reconstruction of the free energy functional, which contains nonlinear dependence on the state variable and differential or convolutional operators, opens the possibility of learning nonequilibrium thermodynamics from only a few snapshots of the dynamics.
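As a toy analogue of PDE-constrained optimization, the sketch below recovers a scalar diffusivity from two snapshots of 1-D diffusion by minimizing the misfit between a forward simulation and the "observed" state. The paper treats far richer state-dependent properties and phase-field dynamics; none of the numbers here come from it.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Grid and time stepping for a tiny periodic 1-D diffusion problem.
nx, dx, dt, steps = 64, 1.0, 0.1, 200

def simulate(D, u0):
    """Explicit finite-difference diffusion; stable while D*dt/dx**2 <= 0.5."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * D * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u

x = np.arange(nx)
u0 = np.exp(-((x - nx / 2) ** 2) / 20.0)
target = simulate(1.2, u0)  # synthetic "observed" later snapshot

def loss(D):
    """Misfit between the simulated and observed snapshots."""
    return np.sum((simulate(D, u0) - target) ** 2)

fit = minimize_scalar(loss, bounds=(0.1, 3.0), method="bounded")
print(fit.x)  # recovers D close to the true value 1.2
```

The paper's contribution is doing this for functions (free energies, state-dependent diffusivities) rather than a single scalar, with the PDE as a hard constraint.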

The Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) produces the latest generation of satellite precipitation estimates and has been widely used since its release in 2014. IMERG V06 provides global rainfall and snowfall data beginning from 2000. This study comprehensively analyzes the quality of the IMERG product at daily and hourly scales in China from 2000 to 2018, with special attention paid to snowfall estimates. The performance of IMERG is compared with nine satellite and reanalysis products (TRMM 3B42, CMORPH, PERSIANN-CDR, GSMaP, CHIRPS, SM2RAIN, ERA5, ERA-Interim, and MERRA2). Results show that the IMERG product outperforms other datasets, except the Global Satellite Mapping of Precipitation (GSMaP), which uses daily-scale station data to adjust satellite precipitation estimates. The monthly-scale station data adjustment used by IMERG naturally has a limited impact on estimates of precipitation occurrence and intensity at the daily and hourly time scales. The quality of IMERG has improved over time, attributed to the increasing number of passive microwave samples. SM2RAIN, ERA5, and MERRA2 also exhibit increasing accuracy with time, which may cause variable performance in climatological studies. Even though it relies on monthly station data adjustments, IMERG shows good performance in both accuracy metrics at hourly time scales and the representation of diurnal cycles. In contrast, although ERA5 is acceptable at the daily scale, it degrades at the hourly scale due to its limitations in reproducing the peak time, magnitude, and variation of diurnal cycles. IMERG underestimates snowfall compared with gauge and reanalysis data. The triple collocation analysis suggests that IMERG snowfall is worse than reanalysis and gauge data, which partly explains the degraded quality of IMERG in cold climates.
This study demonstrates new findings on the uncertainties of various precipitation products and identifies potential directions for algorithm improvement. The results of this study will be useful for both developers and users of satellite rainfall products.
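The triple collocation analysis mentioned above has a standard covariance-based form: with three collocated datasets whose errors are mutually independent, each product's error variance can be estimated without ground truth. A minimal sketch on synthetic data (illustrative only, not the study's data):

```python
import numpy as np

def triple_collocation_errvar(x, y, z):
    """Covariance-based triple collocation: given three collocated
    datasets with mutually independent, zero-mean errors, estimate the
    error variance of each from the 3x3 sample covariance matrix."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic demo: three noisy views of the same signal.
rng = np.random.default_rng(0)
truth = rng.normal(size=20000)
x = truth + rng.normal(scale=0.1, size=truth.size)
y = truth + rng.normal(scale=0.3, size=truth.size)
z = truth + rng.normal(scale=0.5, size=truth.size)
print(triple_collocation_errvar(x, y, z))  # ≈ (0.01, 0.09, 0.25)
```

The independence assumption is exactly what makes a satellite/gauge/reanalysis triplet (as used for snowfall here) attractive.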

Hydrological sciences / The ECOSTRESS spectral library version 1.0
« on: February 14, 2020, 04:57:52 PM »
In June 2018, the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) mission was launched to measure plant temperatures and better understand how they respond to stress. While the ECOSTRESS mission delivers imagery with ~60 m spatial resolution, it is often useful to have spectra at the leaf level in order to explain variability seen at the pixel level. As it was originally titled, the Advanced Spaceborne Thermal Emission Reflection Radiometer (ASTER) spectral library version 2.0 has been expanded to support ECOSTRESS studies by including major additions of laboratory measured vegetation and non-photosynthetic vegetation (NPV) spectra. The library now contains 541 leaf visible shortwave infrared (VIS/SWIR) spectra, 472 leaf thermal infrared (TIR) spectra, and 51 NPV VIS/SWIR and TIR spectra. Previously, the library primarily contained VSWIR and TIR laboratory spectra of minerals, rocks, and man-made materials. This new library, containing over 3000 spectra, was renamed the ECOSTRESS spectral library version 1.0 and is publicly available. It should be noted that, as with the prior versions of the library, the VSWIR and TIR measurements were made with separate instruments with different calibration sources. Care should be taken when combining the data into a seamless spectrum to cover the entire spectral range. The ECOSTRESS spectral library provides a comprehensive collection of natural and man-made laboratory collected spectra covering the wavelength range of 0.35–15.4 μm.

Most soil hydraulic information used in Earth System Models (ESMs) is derived from pedo-transfer functions that use easy-to-measure soil attributes to estimate hydraulic parameters. This parameterization relies heavily on soil texture, but overlooks the critical role of soil structure originated by soil biophysical activity. Soil structure omission is pervasive also in sampling and measurement methods used to train pedotransfer functions. Here we show how systematic inclusion of salient soil structural features of biophysical origin affect local and global hydrologic and climatic responses. Locally, including soil structure in models significantly alters infiltration-runoff partitioning and recharge in wet and vegetated regions. Globally, the coarse spatial resolution of ESMs and their inability to simulate intense and short rainfall events mask effects of soil structure on surface fluxes and climate. Results suggest that although soil structure affects local hydrologic response, its implications on global-scale climate remains elusive in current ESMs.

The amount of impervious surface is an important indicator in the monitoring of the intensity of human activity and environmental change. The use of remote sensing techniques is the only means of accurately carrying out global mapping of impervious surfaces covering large areas. Optical imagery can capture surface reflectance characteristics, while synthetic aperture radar (SAR) images can be used to provide information on the structure and dielectric properties of surface materials. In addition, night-time light (NTL) imagery can detect the intensity of human activity and thus provide important a priori probabilities of the occurrence of impervious surfaces. In this study, we aimed to generate an accurate global impervious surface map at a resolution of 30-m for 2015 by combining Landsat-8 OLI optical images, Sentinel-1 SAR images and VIIRS NTL images based on the Google Earth Engine (GEE) platform. First, the global impervious and non-impervious training samples were automatically derived by combining the GlobeLand30 land-cover product with VIIRS NTL and MODIS enhanced vegetation index (EVI) imagery. Then, based on global training samples and multi-source and multi-temporal imagery, a random forest classifier was trained and used to generate corresponding impervious surface maps for each 5°×5° cell of a geographical grid. Finally, a global impervious surface map, produced by mosaicking numerous 5°×5° regional maps, was validated by interpretation samples and then compared with three existing impervious surface products (GlobeLand30, FROM_GLC and NUACI). The results indicated that the global impervious surface map produced using the proposed multi-source, multi-temporal random forest classification (MSMT_RF) method was the most accurate of the maps, having an overall accuracy of 96.6 % and a kappa coefficient of 0.903, as against 92.5 % and 0.769 for FROM_GLC, 91.1 % and 0.717 for GlobeLand30, and 87.43 % and 0.585 for NUACI.
Therefore, it is concluded that a global 30-m impervious surface map can accurately and efficiently be generated by the proposed MSMT_RF method based on the GEE platform. The global impervious surface map generated in this paper is available at (Zhang et al., 2019).
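To make the classification step concrete, here is a minimal random-forest sketch in the spirit of MSMT_RF, using synthetic stand-ins for the optical, SAR, and night-time-light features. The real pipeline runs on Google Earth Engine with GlobeLand30-derived training samples; nothing below reproduces its actual features or thresholds.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 1000
# Synthetic per-pixel features standing in for Landsat-8 reflectance,
# Sentinel-1 backscatter (dB), and VIIRS night-time-light radiance.
optical = rng.uniform(0.0, 1.0, n)
sar = rng.uniform(-25.0, 0.0, n)
ntl = rng.uniform(0.0, 60.0, n)
X = np.column_stack([optical, sar, ntl])
# Toy rule: bright lights plus strong backscatter -> impervious (1).
y = ((ntl > 30.0) & (sar > -15.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # near-perfect on this separable toy example
```

The ensemble's per-tree randomness is what lets the same recipe scale to noisy, multi-temporal stacks when trained per 5°×5° tile.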

Recent advances in L-band passive microwave remote sensing provide an unprecedented opportunity to monitor soil moisture at ~40 km spatial resolution around the globe. Nevertheless, retrieval of the accurate high spatial resolution soil moisture maps that are required to satisfy hydro-meteorological and agricultural applications remains a challenge. Currently, a variety of downscaling, otherwise known as disaggregation, techniques have been proposed as the solution to disaggregate the coarse passive microwave soil moisture into high-to-medium resolutions. These techniques take advantage of the strengths of both the passive microwave observations of soil moisture having low spatial resolution and the spatially detailed information on land surface features that either influence or represent soil moisture variability. However, such techniques have typically been developed and tested individually under differing weather and climate conditions, meaning that there is no clear guidance on which technique performs best. Consequently, this paper presents a quantitative assessment of the existing radar-, optical-, radiometer-, and oversampling-based downscaling techniques using a single extensive data set collected specifically for that purpose, namely the Soil Moisture Active Passive Experiment (SMAPEx)-4 and -5 airborne field campaigns and the OzNet in situ stations, to determine the relative strengths and weaknesses of their performances. The oversampling-based soil moisture product best captured the temporal and spatial variability of the reference soil moisture overall, though the radar-based products had a better temporal agreement with airborne soil moisture during the short SMAPEx-4 period. Moreover, the differences between temporal analyses of the products against in situ and airborne soil moisture reference data sets pointed to the fact that relying on in situ measurements alone is not appropriate for validation of spatially enhanced soil moisture maps.

• Benchmarking downscaled product performance against in-situ and airborne data.
• A comprehensive inter-comparison of a variety of downscaled soil moisture products.
• Determining the downscaling methodology that yields the best soil moisture estimates.

Several studies have shown that hydrological models do not perform well when applied to periods with climate conditions that differ from those during model calibration. This has important implications for the application of these models in climate change impact studies. The causes of the low transferability to changed climate conditions have, however, only been investigated in a few studies. Here we revisit a study in Austria that demonstrated the inability of a conceptual model to simulate the discharge response to increases in precipitation and air temperature. The aim of the paper is to shed light on the reasons for these model problems. We set up hypotheses for the possible causes of the mismatch between the observed and simulated changes in discharge and evaluate these using simulations with modifications of the model. In the baseline model, trends of simulated and observed discharge over 1978–2013 differ, on average over all 156 catchments, by 92 ± 50 mm yr−1 per 35 yrs. Accounting for variations in vegetation dynamics, as derived from a satellite-based vegetation index, in the calculation of reference evaporation explains 35 ± 9 mm yr−1 per 35 yrs of the differences between the trends in simulated and observed discharge. Inhomogeneities in the precipitation data, caused by a variable number of stations, explain 44 ± 28 mm yr−1 per 35 yrs of this difference. Extending the calibration period from 5 to 25 yrs, varying the objective function by including annually aggregated discharge data, or estimating evaporation with the Penman–Monteith instead of the Blaney–Criddle approach has little influence on the simulated discharge trends. The precipitation data problem highlights the importance of using precipitation data based on a stationary input station network when studying hydrologic changes.
The model structure problem with respect to vegetation dynamics is likely relevant for a wide spectrum of regions in a transient climate and has important implications for climate change impact studies.
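For context, the Blaney–Criddle approach named above estimates reference evapotranspiration from air temperature alone, which is why swapping in Penman–Monteith is a natural sensitivity test. A sketch of the common FAO form; the study's exact variant and coefficients are not given in the text, so this is an assumption:

```python
def blaney_criddle_et0(t_mean_c, p):
    """FAO-24 Blaney-Criddle reference ET (mm/day):
    ET0 = p * (0.46 * T_mean + 8), with T_mean in deg C and p the mean
    daily percentage of annual daytime hours for the month
    (e.g. ~0.27 in a mid-latitude summer month)."""
    return p * (0.46 * t_mean_c + 8.0)

print(blaney_criddle_et0(20.0, 0.27))  # ≈ 4.64 mm/day
```

Because the formula carries no explicit vegetation term, vegetation dynamics must enter through a separate correction, which is exactly the model-structure issue the study probes with the satellite vegetation index.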

The concept of time of concentration in the analysis of catchment responses dates back over 150 years to the introduction of the Rational Method. Since then it has been used in a variety of ways in the formulation of both unit hydrograph and distributed catchment models. It is normally discussed in terms of the velocity of flow of a water particle from the furthest part of a catchment to the outlet. This is also the basis for the definition in the International Glossary of Hydrology. While conceptually simple, this definition is, however, wrong when applied to catchment responses where, in terms of how surface and subsurface flows produce hydrographs, it is more correct to discuss and teach the concept based on celerities and time to equilibrium. While this has been recognized since the 1960s, some recent papers and texts remain confused over the definition and use of time of concentration. The paper sets out the history of its use and clarifies its relationship to time to equilibrium but suggests that both terms are not really useful in explaining hydrological responses. An appendix is included that quantifies the differences between the definitions of response times for subsurface and surface flows under simple assumptions that might be useful in teaching.
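The velocity-versus-celerity distinction can be made concrete for the simplest case: a wide channel with Manning friction, where discharge scales with depth to the 5/3 power, so the kinematic wave celerity exceeds the particle velocity by a factor of 5/3. A sketch under those assumptions (not the appendix's own derivation):

```python
def manning_velocity(h, slope, n):
    """Mean flow velocity (m/s) for a wide rectangular channel,
    Manning's equation with hydraulic radius ~ depth h."""
    return h ** (2.0 / 3.0) * slope ** 0.5 / n

def kinematic_celerity(h, slope, n):
    """Kinematic wave celerity c = dQ/dA = (5/3) v for the same channel:
    the speed at which the hydrograph response travels."""
    return (5.0 / 3.0) * manning_velocity(h, slope, n)

v = manning_velocity(h=0.5, slope=0.01, n=0.035)
c = kinematic_celerity(h=0.5, slope=0.01, n=0.035)
# Travel times over a 1 km reach, in minutes: the wave (celerity)
# arrives 5/3 times sooner than the water particle (velocity).
print(1000 / v / 60, 1000 / c / 60)
```

This is why the hydrograph peak (governed by celerity) can respond faster than any water particle actually travels, the core point of the paper.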

Physically based distributed hydrologic models require geospatial and time-series data that take considerable time and effort to process into model inputs. Tools that automate and speed up input processing facilitate the application of these models. In this study, we developed a set of web-based data services called HydroDS to provide hydrologic data processing ‘software as a service.’ HydroDS provides functions for processing watershed, terrain, canopy, climate, and soil data. The services are accessed through a Python client library that facilitates developing simple but effective data processing workflows with Python. Evaluations of HydroDS by setting up the Utah Energy Balance and TOPNET models for multiple headwater watersheds in the Colorado River basin show that HydroDS reduces the input preparation time compared to manual processing. It also removes the requirements for software installation and maintenance by the user, and the Python workflows enhance reproducibility of hydrologic data processing and tracking of provenance.


•Web-based data services developed for preparation of input data to selected distributed hydrologic models.
•Services are accessed through a Python client library and facilitate development of simple and effective Python workflow scripts.
•Services reduce time for hydrologic model input preparation, enhance reproducibility of hydrologic data processing, and enable tracking of data provenance.
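The HydroDS client library's actual function names are not given in the text, so the calls below are hypothetical stand-ins that only illustrate the chained, provenance-logged workflow pattern the bullets describe:

```python
# Hypothetical sketch of a HydroDS-style workflow script. The service
# names here are invented for illustration; a real client would issue
# HTTP requests and return result URLs.
provenance = []

def service_call(name, **params):
    """Stand-in for one web-service call: returns a result handle and
    records what was requested, so the workflow is reproducible."""
    output = f"{name}_output"  # a real service would return a result URL
    provenance.append({"service": name, "params": params, "output": output})
    return output

# Each step consumes the previous step's output, forming a pipeline.
dem = service_call("subset_dem", bbox=(-107.9, 37.8, -107.5, 38.2))
basin = service_call("delineate_watershed", dem=dem, outlet=(-107.7, 38.0))
forcing = service_call("subset_climate", watershed=basin, years=(2008, 2010))
print([step["service"] for step in provenance])
```

Because every call and its parameters are captured in `provenance`, rerunning the script reproduces the inputs exactly, which is the reproducibility benefit the study reports.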



Freely available and reliable meteorological datasets are in high demand in many scientific and business applications. However, the structure of publicly available databases is often difficult to follow, especially for users who only deal with this kind of dataset on occasion. The “climate” R package aims to fill this gap with an easy-to-use interface for downloading global meteorological data in a fast and consistent way. The package provides access to different sources of in-situ meteorological data, including the Ogimet website, atmospheric vertical sounding gathered at the University of Wyoming’s webpage, and hydrological and meteorological measurements collected by the Institute of Meteorology and Water Management—National Research Institute (i.e., Polish Met Office). This article also provides a quick overview of the key functionalities available within the climate R package, and gives examples of an efficient and tidy workflow of meteorological data within the R based environment. The automation procedures included in the package allow one to download data in a user-defined time resolution (from hourly to annual), for a user-defined time span, and for a specified group of stations or countries. The package also contains metadata, including a list of available stations, their geospatial information, and measurement descriptions with their units. Finally, the obtained datasets can be processed in R or exported to external tools (e.g., spreadsheets or GIS software).

The PraCTES workshop is a series of demos and hands-on tutorials for practical computing by earth scientists and for earth scientists. The goal of the workshop is to introduce practical computational tools and concepts so that earth scientists can spend more time doing science and less time debugging data analysis code, processing large data sets, deciphering model source code, and other frustrating and time-consuming tasks of modern earth science research. We aim to make the workshop accessible and useful to scientists with all levels of programming proficiency and will cover topics ranging from introductory concepts in programming to state-of-the-art software tools for wrangling big data on the Cloud.

Each two-hour session will be highly interactive: instructors will swap between presenting background information on topics (e.g. What is an Earth System Model? How does GitHub work behind the scenes?), demonstrating computing concepts with live demos, and leading hands-on code tutorials and exercises, which attendees can follow along with in real-time on their personal laptops. Attendees should feel free to opt-in and opt-out at whichever point in the curriculum they feel is appropriate. Whenever possible, our demos and hands-on tutorials will be agnostic of programming language and earth science subfield, recognizing the many ways in which people engage with computation in earth science. To get the most out of the workshop, bring a laptop and follow along!

Runoff prediction in ungauged catchments is a significant hydrological challenge. The common approach is to calibrate hydrological models against streamflow data from gauged catchments, and then regionalise or transfer parameter values from the gauged calibration to predict runoff in the ungauged catchments. This paper explores the potential for using parameter values from hydrological models calibrated solely against readily available remotely sensed ET (RS‐ET) data to estimate runoff time series. The advantage of this approach is that it does not require observed streamflow data for model calibration and is therefore particularly useful for runoff prediction in poorly gauged or ungauged regions. The modelling experiments are carried out using data from 222 catchments across Australia. The results from the RS‐ET runoff‐free calibration are encouraging, particularly in simulating monthly runoff and mean annual runoff in the wetter catchments. However, results for daily runoff and in the drier regions are relatively poor, and further developments are needed to realise the benefit of direct model calibration against remotely sensed data to predict runoff in ungauged catchments.
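The core idea, calibrating against an ET record instead of streamflow, can be sketched with a deliberately tiny synthetic example: a one-parameter bucket model whose parameter is recovered from "remotely sensed" ET alone. The model and data below are synthetic stand-ins, not the paper's models or catchments.

```python
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(0.5, 8.0, size=365)  # synthetic daily rainfall (mm)

def run_model(k):
    """One-parameter bucket model: daily ET is a fraction k of storage;
    storage above 100 mm spills as runoff."""
    s, et, q = 50.0, [], []
    for p in rain:
        s += p
        e = k * s
        s -= e
        q.append(max(s - 100.0, 0.0))
        s = min(s, 100.0)
        et.append(e)
    return np.array(et), np.array(q)

rs_et, _ = run_model(0.05)  # synthetic "remotely sensed" ET record

# Calibrate k against the ET record alone; no streamflow data is used,
# yet the calibrated model also yields a runoff series.
ks = np.linspace(0.01, 0.2, 200)
errors = [np.mean((run_model(k)[0] - rs_et) ** 2) for k in ks]
best_k = ks[int(np.argmin(errors))]
print(best_k)  # close to the true value 0.05
```

The sketch shows why the approach is attractive for ungauged catchments: the runoff series falls out of a calibration that never touched a gauge, with the caveat (as the results show) that ET constrains monthly and annual runoff better than daily runoff.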

Changes in the Earth's climate have been increasingly observed. Assessing the likelihood that each of these changes has been caused by human influence is important for decision making on mitigation and adaptation policy. Because of their large societal and economic impacts, extreme events have garnered much media attention—have they become more frequent and more intense, and if so, why? To answer such questions, extreme event attribution (EEA) tries to estimate extreme event likelihoods under different scenarios. Over the past decade, statistical methods and experimental designs based on numerical models have been developed, tested, and applied. In this article, we review the basic probability schemes, inference techniques, and statistical hypotheses used in EEA. To implement EEA analysis, the climate community relies on the use of large ensembles of climate model runs. We discuss, from a statistical perspective, how extreme value theory could help to deal with the different modeling uncertainties. In terms of interpretation, we stress that causal counterfactual theory offers an elegant framework that clarifies the design of event attributions. Finally, we pinpoint some remaining statistical challenges, including the choice of the appropriate spatio-temporal scales to enhance attribution power, the modeling of concomitant extreme events in a multivariate context, and the coupling of multi-ensemble and observational uncertainties.
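A central quantity in the EEA framework described above is the comparison of event probabilities between the factual (all forcings) and counterfactual (natural forcings) worlds. The fraction of attributable risk, which under a monotonicity assumption equals the probability of necessary causation in the causal counterfactual theory mentioned, is a one-line computation; the probabilities below are illustrative, not from any real attribution study:

```python
def far(p0, p1):
    """Fraction of attributable risk: 1 - p0/p1, where p0 and p1 are the
    event exceedance probabilities in the counterfactual (natural
    forcings) and factual (all forcings) worlds. Under monotonicity this
    equals the probability of necessary causation."""
    return 1.0 - p0 / p1

# Illustrative ensemble-derived exceedance probabilities:
print(far(p0=0.01, p1=0.04))  # -> 0.75
```

In practice p0 and p1 are themselves estimated from large ensembles, often via extreme value fits, which is where the modeling uncertainties discussed in the review enter.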
