Recent Posts

1
Interesting information / IMDLIB - A Python Package for handling IMD datasets
« Last post by Pankaj Dey on October 15, 2020, 07:02:12 PM »
IMDLIB is a Python package for downloading and handling binary gridded data from the India Meteorological Department (IMD). For more information about the IMD datasets, see: http://imdpune.gov.in/Clim_Pred_LRF_New/Grided_Data_Download.html
Link to tutorial: https://pratiman-91.github.io/2020/10/05/IMDLIB.html
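For a quick sense of the workflow, here is a minimal sketch based on the linked tutorial; the function names follow the tutorial, but check the documentation of the version you install, as the API may change:

```python
# Minimal IMDLIB sketch (based on the linked tutorial; verify against
# the documentation of your installed version).
import imdlib as imd

# Download IMD's yearwise binary (.grd) rainfall grids for 2015-2018
data = imd.get_data('rain', 2015, 2018, fn_format='yearwise')

# Convert the binary grids to an xarray Dataset for further analysis
ds = data.get_xarray()
print(ds)
```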
2
Interesting information / FUSE: Framework for Understanding Structural Errors
« Last post by Pankaj Dey on October 11, 2020, 12:29:12 PM »
This is a source code repository for the Framework for Understanding Structural Errors, or FUSE. FUSE is a modular modelling framework which enables the generation of a myriad of conceptual hydrological models by recombining elements from commonly used models. Running a hydrological model means making a wide range of decisions, which influence the simulations in different ways and to different extents. Our goal with FUSE is to enable users to be in charge of these decisions, so that they can understand their effects and thereby develop and use better models.
FUSE was built from scratch to be modular: it offers several options for each important modelling decision and enables the addition of new modules. In contrast, most traditional hydrological models rely on a single model structure (most processes are simulated by a single set of equations). FUSE's modularity makes it easier to i) understand differences between models, ii) run a large ensemble of models, iii) capture the spatial variability of hydrological processes and iv) develop and improve hydrological models in a coordinated fashion across the community.
New features: The initial FUSE implementation (FUSE1) is described in Clark et al. (WRR, 2008). The implementation provided here (which will become FUSE2) was created with users in mind and significantly increases the usability and range of applicability of the original version. In particular, it adds five main features:
 
  • an interface enabling the use of the different FUSE modes (default, calibration, regionalisation),
  • a distributed mode enabling FUSE to run on a grid whilst efficiently managing memory,
  • all the input, output and parameter files are now NetCDF files to improve reproducibility,
  • a calibration mode based on the shuffled complex evolution algorithm (Duan et al., WRR, 1992),
  • a snow module described in Henn et al. (WRR, 2015).
Manual: Instructions to compile the code provided in this repository and to run FUSE are provided in the FUSE manual.
License: FUSE is distributed under the GNU General Public License Version 3. For details, see the file LICENSE in the FUSE root directory or visit the online version.


https://github.com/naddor/fuse/tree/develop
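Because FUSE2 reads and writes NetCDF, runs can be inspected with standard tools. Below is a hedged Python/xarray sketch; the file name and variable name are purely hypothetical, so consult the FUSE manual for the actual conventions:

```python
# Hypothetical sketch: inspect a FUSE NetCDF output with xarray.
# 'catchment_runs_def.nc' and the variable 'q_routed' are assumptions
# for illustration; see the FUSE manual for real naming conventions.
import xarray as xr

ds = xr.open_dataset("catchment_runs_def.nc")  # hypothetical output file
print(ds)  # list dimensions, coordinates and variables

# Plot one simulated runoff series (variable name assumed)
ds["q_routed"].plot()
```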
3
Abstract. Over the past decades, many global land-cover products have been released; however, there is still a lack of a global land-cover map with both a fine classification system and fine spatial resolution. In this study, a novel global 30 m land-cover classification with a fine classification system for the year 2015 (GLC_FCS30-2015) was produced by combining time series of Landsat imagery and high-quality training data from the GSPECLib (Global Spatial Temporal Spectra Library) on the Google Earth Engine computing platform. First, the global training data from the GSPECLib were developed by applying a series of rigorous filters to the MCD43A4 NBAR and CCI_LC land-cover products. Secondly, a local adaptive random forest model was built for each 5° × 5° geographical tile by using the multi-temporal Landsat spectral and texture features of the corresponding training data, and the GLC_FCS30-2015 land-cover product containing 30 land-cover types was generated for each tile. Lastly, GLC_FCS30-2015 was validated against 44,043 validation samples using three different validation systems (containing different land-cover details). The validation results indicated that GLC_FCS30-2015 achieved an overall accuracy of 82.5% and a kappa coefficient of 0.784 for the level-0 validation system (9 basic land-cover types), an overall accuracy of 71.4% and a kappa coefficient of 0.686 for the UN-LCCS (United Nations Land Cover Classification System) level-1 system (16 LCCS land-cover types), and an overall accuracy of 68.7% and a kappa coefficient of 0.662 for the UN-LCCS level-2 system (24 fine land-cover types). Comparisons against other land-cover products (CCI_LC, MCD12Q1, FROM_GLC and GlobeLand30) indicated that GLC_FCS30-2015 provides more spatial detail than CCI_LC-2015 and MCD12Q1-2015 and a greater diversity of land-cover types than FROM_GLC-2015 and GlobeLand30-2010, and that GLC_FCS30-2015 achieved the best overall accuracy of 82.5%, against 59.1% for FROM_GLC-2015 and 75.9% for GlobeLand30-2010. Therefore, it is concluded that GLC_FCS30-2015 is the first global land-cover dataset to provide a fine classification system with high classification accuracy at 30 m. The GLC_FCS30-2015 global land-cover product generated in this paper is available at https://doi.org/10.5281/zenodo.3986871 (Liu et al., 2020).
Link: https://essd.copernicus.org/preprints/essd-2020-182/
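The overall accuracy and kappa coefficient reported above are standard confusion-matrix statistics. As a purely illustrative sketch (not the authors' code), they can be computed from paired reference and mapped labels like this:

```python
# Illustrative only: overall accuracy and Cohen's kappa from paired
# reference (validation) and predicted (mapped) land-cover labels.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = [1, 1, 2, 2, 3, 3, 3, 1]  # toy ground-truth class codes
predicted = [1, 2, 2, 2, 3, 3, 1, 1]  # toy mapped class codes

print("Overall accuracy:", accuracy_score(reference, predicted))
print("Kappa coefficient:", cohen_kappa_score(reference, predicted))
```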
4
High temporal resolution meteorology and soil physics observations from INCOMPASS land surface stations in India, 2016 to 2018

The dataset contains time series observations of meteorological and soil physics variables logged at one-minute time resolution at three Land Surface Stations in India. The three INCOMPASS Land Surface Stations were located at: (1) agricultural land in Southern Karnataka (Berambadi); (2) the University of Agricultural Sciences in Dharwad in northern Karnataka; and (3) a semi-natural grassland at the Indian Institute of Technology in Kanpur (IITK), Uttar Pradesh.

Observations were collected under the Interaction of Convective Organization and Monsoon Precipitation, Atmosphere, Surface and Sea (INCOMPASS) Project between January 2016 and January 2019.

Link to dataset: https://catalogue.ceh.ac.uk/documents/c5e72461-c61f-4800-8bbf-95c85f74c416
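As a hedged sketch of handling such one-minute records in Python (the file and column names below are hypothetical; check the dataset's supporting documentation for the actual layout):

```python
# Hypothetical sketch: load one-minute station records and resample to
# hourly means. File name and column names are assumptions.
import pandas as pd

df = pd.read_csv("berambadi_2016.csv",
                 parse_dates=["timestamp"], index_col="timestamp")
hourly = df[["air_temperature", "soil_moisture"]].resample("1H").mean()
print(hourly.head())
```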
5
Interesting information / Effective Computing
« Last post by Pankaj Dey on September 16, 2020, 01:01:10 PM »
Overwhelmed by the world of computing tools you could be using for your research? Mired in messy code that won't evolve with your ideas? This course is for you. Designed for grad students across the College of the Environment, it is a broad, practical introduction to the most important computer things you need to know to keep your research flowing smoothly.
http://faculty.washington.edu/pmacc/Classes/EffCom_2020/index.html
6
Interesting information / Introducing the R Package “biascorrection”
« Last post by Pankaj Dey on September 15, 2020, 05:40:34 PM »
For a variety of reasons, we need hydrological models for our short- and long-term predictions and planning. However, it is no secret that these models always suffer from some degree of bias. This bias can stem from many different and often interacting sources. Some examples are biases in underlying model assumptions, missing processes, model parameters, calibration parameters, and imperfections in input data (Beven and Binley, 1992).
The question of how to use models, given all these uncertainties, has been an active area of research for at least 50 years and will probably remain so for the foreseeable future, but going through that is not the focus of this blog post.
In this post, I explain a technique called bias correction that is frequently used in an attempt to improve model predictions. I also introduce an R package for bias correction that I recently developed; the package is called “biascorrection.” Although most of the examples in this post are about hydrological models, the arguments and the R package might be useful in other disciplines, for example with atmospheric models, which have been one of the hotspots of bias correction applications (for example, here, here and here). The reason is that the algorithm follows a series of simple mathematical procedures that can be applied to other questions and research areas.


Link to the blog post: https://waterprogramming.wordpress.com/2020/09/15/introducing-the-r-package-biascorrection/
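The package itself is in R; as a language-neutral illustration of one widely used bias-correction technique, empirical quantile mapping, here is a short Python sketch. It shows the general procedure, not necessarily the package's exact algorithm:

```python
# Illustrative empirical quantile mapping: map each raw model value to
# the observed value at the same empirical quantile of the historical
# distributions. Not the biascorrection package's exact code.
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Bias-correct model_new using historical model/observation records."""
    # Empirical quantile of each new model value in the historical run
    q = np.searchsorted(np.sort(model_hist), model_new) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Read off the observed values at those quantiles
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 3.0, 1000)   # synthetic "observations"
sim = rng.gamma(2.0, 4.0, 1000)   # synthetic biased "model" output
corrected = quantile_map(sim, obs, sim)
print(sim.mean(), obs.mean(), corrected.mean())
```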
7
We develop a Bayesian Land Surface Phenology (LSP) model and examine its performance using Enhanced Vegetation Index (EVI) observations derived from the Harmonized Landsat Sentinel-2 (HLS) dataset. Building on previous work, we propose a double logistic function that, once couched within a Bayesian model, yields posterior distributions for all LSP parameters. We assess the efficacy of the Normal, Truncated Normal, and Beta likelihoods to deliver robust LSP parameter estimates. Two case studies are presented and used to explore aspects of the proposed model. The first, conducted over forested pixels within an HLS tile, explores the choice of likelihood and space-time varying HLS data availability for long-term average LSP parameter point and uncertainty estimation. The second, conducted on a small area of interest within the HLS tile on an annual time-step, further examines the impact of sample size and choice of likelihood on LSP parameter estimates. Results indicate that while the Truncated Normal and Beta likelihoods are theoretically preferable when the vegetation index is bounded, all three likelihoods performed similarly when the number of index observations is sufficiently large and values are not near the index bounds. Both case studies demonstrate how pixel-level LSP parameter posterior distributions can be used to propagate uncertainty through subsequent analysis. As a companion to this article, we provide an open-source R package, rsBayes, and supplementary data and code used to reproduce the analysis results. The proposed model specification and software implementation deliver computationally efficient, statistically robust, and inferentially rich LSP parameter posterior distributions at the pixel level across massive raster time series datasets.

https://arxiv.org/abs/2009.05203
Link to R Package manual : https://cran.r-project.org/web/packages/rsBayes/rsBayes.pdf
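The double logistic function at the core of the model describes seasonal green-up and senescence of a vegetation index. One common parameterization is sketched below; the paper's exact functional form and parameter names may differ:

```python
# One common double-logistic land surface phenology curve (illustrative;
# the paper's exact parameterization may differ): a baseline VI plus an
# amplitude modulated by green-up and senescence logistic transitions.
import numpy as np

def double_logistic(t, vi_min, vi_amp, t_green, r_green, t_sen, r_sen):
    greenup = 1.0 / (1.0 + np.exp(-r_green * (t - t_green)))
    senescence = 1.0 / (1.0 + np.exp(-r_sen * (t - t_sen)))
    return vi_min + vi_amp * (greenup - senescence)

doy = np.arange(1, 366)  # day of year
evi = double_logistic(doy, 0.15, 0.45, 120.0, 0.10, 280.0, 0.08)
```

In the Bayesian version, priors are placed on these parameters and the chosen likelihood (Normal, Truncated Normal, or Beta) links the curve to the observed index values.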
8
Interesting information / Workshop : Uncertainties in data analysis
« Last post by Pankaj Dey on September 04, 2020, 10:42:45 AM »
The 3-day interdisciplinary workshop on "Uncertainties in data analysis" will be held at the Potsdam Institute for Climate Impact Research (PIK), Germany, from 30 September to 2 October 2020. The workshop will consist of 5 lectures and hands-on tutorials conducted by experts from different applied fields that deal with uncertainties. The sessions are primarily intended for postgraduate, doctoral and post-doctoral researchers. Attendees are invited to contribute to the workshop with presentations of their own research, covering uncertainties or challenging investigations in (palaeo-)climate research. The tutorials will cover more general and some specific topics, such as applied Bayesian statistics, palaeoclimate age uncertainties, ice core uncertainties, nonlinear time series analysis, and how uncertainties in data can be modelled in a theoretical or applied sense.
https://eveeno.com/IUCliD
9
Interesting information / varrank: a variable selection approach
« Last post by Pankaj Dey on August 25, 2020, 11:52:55 AM »
A common challenge encountered when working with high-dimensional datasets is that of variable selection. All relevant confounders must be taken into account to allow for unbiased estimation of model parameters, while balancing the need for parsimony and interpretable models. This is known to be one of the most controversial and difficult tasks in epidemiological analysis.
Variable selection approaches can be categorized into three broad classes: filter-based methods, wrapper-based methods, and embedded methods. They differ in how the methods combine the selection step and the model inference. An appealing filter approach is the minimum redundancy maximum relevance (mRMRe) algorithm. The purpose of this heuristic approach is to select the most relevant variables from a set by penalising according to the amount of redundancy variables share with previously selected variables. In epidemiology, the most frequently used approaches to tackle variable selection based on modeling use goodness-of-fit metrics. The paradigm is that important variables for modeling are variables that are causally connected, and predictive power is a proxy for causal links. On the other hand, the mRMRe algorithm aims to measure the importance of variables based on a relevance measure penalized by redundancy, which makes it appealing for epidemiological modeling.
varrank has a flexible implementation of the mRMRe algorithm which performs variable ranking based on mutual information. The package is particularly suitable for exploring multivariate datasets requiring a holistic analysis. The two main problems that can be addressed by this package are the selection of the most representative variables for modeling a collection of variables of interest, i.e., dimension reduction, and variable ranking with respect to a set of variables of interest. A sketch of the underlying heuristic follows.
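As a plain illustration of the mRMR heuristic itself (not varrank's R implementation), a greedy ranking scores each candidate by its relevance to the target minus its average redundancy with the variables already selected:

```python
# Illustrative greedy mRMR ranking (not varrank's implementation):
# relevance = MI(feature, target); redundancy = mean MI with already
# selected features; pick the best-scoring candidate each round.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_rank(X, y):
    relevance = mutual_info_regression(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        scores = []
        for j in remaining:
            redundancy = (np.mean([mutual_info_regression(X[:, [k]], X[:, j])[0]
                                   for k in selected])
                          if selected else 0.0)
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected  # column indices, ranked most to least important
```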
Publications:  https://arxiv.org/abs/1804.07134
 Software:  https://cran.r-project.org/package=varrank


source: https://www.math.uzh.ch/as/index.php?id=159&L=1
10
Hi,

I am using the SVM-PGSL approach for downscaling climate data; the relevant paper is available here: https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2009JD013548

In this paper I didn’t get idea to find Legragian Multipliers (you can find in the Appendix section).
Please if some one used this approach let me know and try to give solution..

Further details are given in the attachment.

Please kindly help
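Not the paper's PGSL-based procedure, but as a general pointer: SVM/SVR solvers compute the Lagrangian multipliers internally when solving the dual problem, and most libraries expose them after fitting. A hedged scikit-learn sketch:

```python
# Hedged illustration: scikit-learn's SVR solves the dual quadratic
# program; dual_coef_ holds the fitted multiplier differences
# (alpha_i - alpha_i*) for the support vectors.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (100, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=100)

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)
print(model.dual_coef_)  # Lagrangian multiplier differences
print(model.support_)    # indices of the support vectors
```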