Wednesday, May 1, 2024

Pycnometer - Liquid and Solid Density Measurement

Pycnometer Measurements Calculator

(Interactive calculator: computes pycnometer volume, liquid density, and solid density.)
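The calculator's three outputs follow the standard pycnometer relations: the pycnometer volume is calibrated from the mass of water it holds, the liquid density is the liquid mass over that volume, and the solid density comes from liquid displacement. A minimal base-R sketch is below; the masses are made-up example inputs, not values from the post.

```r
# Standard pycnometer formulas; all masses below are hypothetical examples.
rho_water <- 0.998                  # density of water, g/cm^3, near 20 C

m_empty        <- 30.00   # empty pycnometer (g)
m_water        <- 80.00   # pycnometer filled with water (g)
m_liquid       <- 74.00   # pycnometer filled with test liquid (g)
m_solid        <- 10.00   # dry solid sample (g)
m_solid_liquid <- 80.30   # pycnometer + solid, topped up with liquid (g)

# Pycnometer volume, calibrated from the mass of water it holds
V_pyc <- (m_water - m_empty) / rho_water

# Liquid density: mass of liquid over the calibrated volume
rho_liquid <- (m_liquid - m_empty) / V_pyc

# Solid density by liquid displacement
rho_solid <- m_solid * rho_liquid / (m_solid + m_liquid - m_solid_liquid)

round(c(volume = V_pyc, liquid = rho_liquid, solid = rho_solid), 3)
```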
Reference: Lal, R., & Shukla, M. K. (2004). Principles of soil physics. CRC Press.

Soil Properties Calculations

Question: One liter of soil has a wet weight of 1500 g, a dry weight of 1200 g, and a volume of soil solids of 450 cm³. Compute all 13 soil physical properties.

Soil Properties Calculator









Property Value
Particle Density 2.67 g/cm³
Bulk Density (Dry) 1.20 g/cm³
Wet Bulk Density 1.50 g/cm³
Particle Specific Gravity 2.67
Dry Specific Volume 0.833 cm³/g
Porosity 0.55
Air Porosity 0.25
Void Ratio 1.22
Air Ratio 0.556
Gravimetric Water Content 0.25 g/g
Volumetric Water Content 0.30 cm³/cm³
Liquid Ratio 0.667
Soil Saturation 0.545
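The 13 properties can be reproduced from the given data in a few lines of base R (definitions follow Lal & Shukla, 2004); the variable names here are mine, not the calculator's:

```r
# Given: 1 L (1000 cm^3) of soil, wet weight 1500 g, dry weight 1200 g,
# volume of soil solids 450 cm^3; water density taken as 1 g/cm^3.
wet_wt  <- 1500   # g
dry_wt  <- 1200   # g
V_total <- 1000   # cm^3
V_solid <- 450    # cm^3
rho_w   <- 1.0    # g/cm^3

m_water <- wet_wt - dry_wt     # 300 g of water
V_water <- m_water / rho_w     # 300 cm^3
V_void  <- V_total - V_solid   # 550 cm^3 of pore space
V_air   <- V_void - V_water    # 250 cm^3 of air-filled pores

particle_density   <- dry_wt / V_solid      # g/cm^3
bulk_density_dry   <- dry_wt / V_total      # g/cm^3
bulk_density_wet   <- wet_wt / V_total      # g/cm^3
specific_gravity   <- particle_density / rho_w
dry_specific_vol   <- 1 / bulk_density_dry  # cm^3/g
porosity           <- V_void / V_total
air_porosity       <- V_air / V_total
void_ratio         <- V_void / V_solid
air_ratio          <- V_air / V_solid
grav_water_content <- m_water / dry_wt
vol_water_content  <- V_water / V_total
liquid_ratio       <- V_water / V_solid
saturation         <- V_water / V_void
```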

Reference: Lal, R., & Shukla, M. K. (2004). Principles of soil physics. CRC Press.

Tuesday, May 16, 2023

Write your package

#========================================================

#-- Multiple functions

#=========================================================


DPGV <- function(x, y, z) {

  # x = wet weight (g), y = dry weight (g), z = total soil volume (cm^3)

  Bulk_density <- y / z

  Porosity <- 1 - (Bulk_density / 2.65)   # assumes a particle density of 2.65 g/cm^3

  Gravimetric_water <- (x - y) / y

  Volumetric_water <- Gravimetric_water * Bulk_density

  Volumetric_air <- Porosity - Volumetric_water

  Void_ratio <- Porosity / (1 - Porosity)

  # collect results in a one-row data frame (table() would cross-tabulate them)
  out <- data.frame(Bulk_density, Porosity, Gravimetric_water, Volumetric_water, Volumetric_air, Void_ratio)

  return(out)

}

#----------------------------------------------------------------------------

aggregate_diameter <- function(a, b, x) {

  # a, b = lower and upper sieve openings (mm) of the size class;
  # x = weight (g) of aggregates retained, out of a 200 g sample

  w <- x / 200     # weight proportion of this size class

  d <- (a + b) / 2 # mean diameter of the size class (mm)

  MWD <- d * w     # contribution to the mean weight diameter

  # geometric mean diameter; for several classes this generalizes to
  # exp(sum(w_i * log(d_i)) / sum(w_i))
  GMD <- exp((w * log(d)) / w)

  out <- data.frame(MWD, GMD)

  return(out)

}

#=============================================================

#== Single function

#=============================================================

soil_weight <- function(x, y) {

  # assuming x = bulk density (Mg m^-3) and y = soil depth (m):
  # mass of a 100 m x 100 m (1 ha) soil layer, in Mg
  result <- 100 * 100 * x * y

  return(result)

}

##############################################################

Thursday, March 30, 2023

Linear Mixed Model (LMM)

What is a linear mixed model, and how can we apply it?

A Linear Mixed Model (LMM) is a statistical modeling approach used to analyze data that have both fixed and random effects and in which the observations are correlated. LMMs generalize linear regression models and allow complex relationships between the response variable and the predictor variables to be modeled.

LMMs are used across many research areas, such as the social sciences, medical research, and engineering. In the social sciences, for example, LMMs may be used to analyze longitudinal data where the same individuals are measured repeatedly over time; in medical research, they may be used to analyze clinical trials in which patients are randomized to different treatments.

Applying an LMM involves specifying a statistical model with fixed effects, random effects, and an error term. The fixed effects are the predictor variables of interest, the random effects account for variation in the data that the fixed effects cannot explain, and the error term represents the variability that the model cannot account for.

LMMs can be fitted in various software packages, such as R, SAS, and SPSS. The analysis involves fitting the model to the data and estimating its parameters by maximum likelihood or restricted maximum likelihood (REML). Once the model has been fitted, hypothesis tests and confidence intervals can be computed for the fixed effects, and the model can be used for prediction and simulation.

LMMs are a powerful modeling approach that can handle complex data structures and relationships between variables. However, they can be computationally intensive and may require a large sample size to estimate the parameters accurately, and interpreting the results can be challenging because the fixed and random effects are often correlated.

Example

In a Linear Mixed Model (LMM), fixed effects are predictor variables that are of primary interest to the researcher, and their effects are assumed to be constant across all experimental units. Random effects, on the other hand, are variables whose effects are assumed to be random and vary across experimental units.

An example of fixed and random effects can be illustrated in a study that aims to investigate the effect of different doses of a drug on blood pressure. In this study, the fixed effect would be the dose of the drug, as this is the predictor variable of primary interest. The different doses of the drug would be assigned to the experimental units (e.g., patients), and their effects would be estimated by the model.

The random effect in this study could be the effect of the patient, as the response variable (blood pressure) may vary across different patients due to individual differences such as genetics, lifestyle, and other factors. The random effect of the patient would account for this variability and allow for a more accurate estimate of the fixed effect of the drug.

In summary, in this example, the fixed effect would be the dose of the drug, and the random effect would be the effect of the patient on the response variable (blood pressure). By including both fixed and random effects in the LMM, we can obtain more precise estimates of the treatment effects and account for the variability due to individual differences between the experimental units.
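The drug-dose example above can be sketched with nlme (which ships with R). The data here are simulated purely for illustration: dose is the fixed effect and patient is the random effect, exactly as described.

```r
library(nlme)

# Simulated stand-in data: 20 patients, each measured at four doses.
set.seed(1)
d <- expand.grid(patient = factor(1:20), dose = c(0, 5, 10, 20))

# Patient-specific baselines play the role of the random effect;
# the dose effect (-0.6 mmHg per unit) is the fixed effect of interest.
baseline <- rnorm(20, mean = 120, sd = 8)
d$bp <- baseline[d$patient] - 0.6 * d$dose + rnorm(nrow(d), sd = 4)

# Fixed effect: dose; random intercept: patient
m <- lme(bp ~ dose, random = ~ 1 | patient, data = d)
summary(m)   # dose estimate with standard errors that respect the grouping
fixef(m)     # fixed-effect coefficients
```

Because repeated measurements on the same patient are correlated, the random intercept absorbs between-patient variability and gives an honest standard error for the dose effect.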





Monday, March 27, 2023

Statistical Models in R (ANOVA & Multiple Comparison Tests)

 

Statistical Model in R

 

Linear Mixed Model

ANOVA

library(nlme)

POD <- lme(POD ~ Year*treatment*variety + replication, random = ~1|plot/Year/replication, data=dataset)

anova(POD)

Multiple Comparison

library(lsmeans)    # superseded by emmeans, but still works
library(multcomp)   # provides the cld() generic

POD.rlsm <- lsmeans(POD, ~Year*treatment*variety, adjust="tukey")

POD.rcld <- cld(POD.rlsm, alpha=0.05, Letters=letters, adjust="tukey")

POD.rt <- POD.rcld[,c(1:4,9)]

POD.rt

Complete Randomized Design (CRD)

ANOVA

aku<-lm(POD~treatment, data=dataset)

anova(aku)

Multiple Comparison test

aov.out <- aov(POD~treatment, data=dataset)

library(agricolae)

LSD.test(aov.out, "treatment", console = TRUE)

Randomized Complete Block Design (RCBD)

ANOVA

wina<- lm (POD~treatment+replication, data=dataset)

anova(wina)

Multiple Comparison test

LSD.test(wina, "treatment", console = TRUE)

RCBD Split-Plot Design

ANOVA

kebe<-with(dataset,sp.plot(replication,treatment,variety,POD))

Initial working

gla <- kebe$gl.a   # error df for the main-plot factor

glb <- kebe$gl.b   # error df for the sub-plot factor

Ea <- kebe$Ea      # error mean square for the main-plot factor

Eb <- kebe$Eb      # error mean square for the sub-plot factor

First factor

ms <- with(dataset, LSD.test(POD, treatment, gla, Ea, console = TRUE))

Second factor

ms <- with(dataset, LSD.test(POD, variety, glb, Eb, console = TRUE))

Interaction

ms <- with(dataset, LSD.test(POD, treatment:variety, glb, Eb, console = TRUE))

RCBD Factorial Design

ANOVA

wina <- lm(POD ~ treatment + replication + variety + treatment:variety, data=dataset)

anova(wina)

Multiple Comparison test

LSD.test(dataset$POD, dataset$treatment:dataset$variety, 78, 54.2, console = TRUE)

78 = Degrees of freedom of the error term

54.2 = Mean square error (note that in LSD.test the error df come before the mean square)
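Rather than hard-coding the error mean square and its degrees of freedom, both can be read directly from the ANOVA table of the fitted model. A base-R sketch with simulated stand-in data (the factor names mirror the factorial model above):

```r
# Simulated data: 4 treatments x 3 varieties x 5 replications
set.seed(7)
dataset <- expand.grid(treatment = factor(1:4),
                       variety = factor(1:3),
                       replication = factor(1:5))
dataset$POD <- rnorm(nrow(dataset), mean = 50, sd = 7)

wina <- lm(POD ~ treatment + replication + variety + treatment:variety, data = dataset)
a <- anova(wina)

# Pull the residual (error) mean square and df from the ANOVA table
MSerror <- a["Residuals", "Mean Sq"]
DFerror <- a["Residuals", "Df"]

# These can then be passed to LSD.test(..., DFerror, MSerror, ...)
c(DFerror = DFerror, MSerror = MSerror)
```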

 

 

 



 

Saturday, March 4, 2023

Characteristics of the Normal Distribution

 

There are several characteristics or properties of a dataset that indicate that it is normally distributed. Here are some of the key things to look for:

  1. Symmetry: A normal distribution is symmetric, which means that the left half of the distribution is a mirror image of the right half. In other words, the mean, median, and mode are all equal and located at the center of the distribution.
  2. Bell-shaped curve: A normal distribution has a bell-shaped curve that is relatively smooth and continuous. The curve is highest at the mean and tapers off gradually in both directions.
  3. Empirical rule: The empirical rule, also known as the 68-95-99.7 rule, states that approximately 68% of the data falls within one standard deviation of the mean, approximately 95% falls within two standard deviations, and approximately 99.7% falls within three standard deviations.
  4. Skewness and kurtosis: A normal distribution has zero skewness and zero excess kurtosis: the distribution is not tilted to either side, and its tails are neither heavier nor lighter than those of the ideal bell curve.
  5. QQ plot: A QQ plot, or quantile-quantile plot, is a graphical method for comparing the distribution of the data to the normal distribution. If the data is normally distributed, the points on the QQ plot will fall along a straight line.
  6. Mean, median, and mode are equal: In a normal distribution, the mean, median, and mode are all equal and located at the center of the distribution. This is often referred to as the central tendency of the data.
  7. Probability density function: A normal distribution can be fully described by its probability density function, which is a mathematical function that describes the probability of observing a particular value or range of values in the distribution.
  8. Standard deviation: The standard deviation of a normally distributed dataset provides a measure of the spread or variability of the data. About 68% of the data falls within one standard deviation of the mean, and about 95% falls within two standard deviations.
  9. Independent, identically distributed (iid) samples: If a sample of data is drawn from a normally distributed population, and the samples are independent and identically distributed, then the sample mean will also be normally distributed.
  10. Z-scores: Z-scores, which measure the number of standard deviations a value is from the mean, are commonly used in normal distribution calculations and statistical tests.

It's important to note that not all datasets that exhibit these characteristics are necessarily normally distributed, and there are statistical tests that can be used to confirm normality. However, if a dataset displays all or most of these characteristics, it is a good indication that it is normally distributed.
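Several of the checks above can be run in a few lines of base R: symmetry (point 1), the empirical rule (point 3), a Q-Q plot (point 5), and a formal Shapiro-Wilk test. The data here are simulated; in practice, replace x with your own sample.

```r
# Simulated sample for illustration
set.seed(123)
x <- rnorm(200, mean = 10, sd = 2)

# Symmetry: mean and median should nearly coincide
c(mean = mean(x), median = median(x))

# Empirical rule: share of data within 1 and 2 standard deviations
within1 <- mean(abs(x - mean(x)) < sd(x))      # expect roughly 0.68
within2 <- mean(abs(x - mean(x)) < 2 * sd(x))  # expect roughly 0.95

# Q-Q plot: points should fall along the reference line
qqnorm(x); qqline(x)

# Formal test: a large p-value gives no evidence against normality
shapiro.test(x)
```

Remember that a non-significant Shapiro-Wilk result does not prove normality; it only means the test found no evidence against it, which is why the graphical checks are worth running alongside it.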