Usage of Inferential Statistics
Inferential Statistics – What Type Of Statistics Is It?
Inferential statistics is a branch of statistics in which a random sample of data is drawn from a given population and the information collected is used to describe and make inferences about that population. Inferential statistics relies on sampling because measuring an entire population is often impractical or nearly impossible.
Given a hypothesis about a population from which inferences have to be drawn, statistical inference consists of two processes:
- Selection of a statistical model for the process generating the data.
- Formulating the propositions from the model.
For example, if one needs to know the weight of children in a given country, a random sample of children can be selected from the entire population and the weight of each child in the sample measured. The mean weight of the sample is then calculated, and from it an inference is drawn: the mean weight of the entire population of children lies within a specified interval of values.
An interval of values is used because no sample represents the entire population perfectly; every sample involves some sampling error.
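The weight example above can be sketched in Python. The numbers below are simulated, hypothetical data, and the 95% interval uses the normal approximation (z ≈ 1.96) as a simplifying assumption:

```python
import math
import random
import statistics

# Hypothetical data: weights (kg) of a random sample of 100 children.
random.seed(0)
sample = [random.gauss(30, 5) for _ in range(100)]

mean = statistics.mean(sample)
# Standard error of the mean: sample stdev divided by sqrt(n).
sem = statistics.stdev(sample) / math.sqrt(len(sample))

# 95% confidence interval using the normal approximation (z = 1.96).
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"sample mean: {mean:.1f} kg")
print(f"95% interval: ({lower:.1f}, {upper:.1f}) kg")
```

The inference is then that the population mean weight plausibly lies inside the printed interval, rather than being exactly equal to the sample mean.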
The conclusion of any statistical inference is a statistical proposition. Statistical propositions take different forms. The common forms include:
- A point estimate, i.e. a single value that best approximates the parameter of interest.
- An interval estimate, i.e. an interval constructed from sample data such that, over repeated samples drawn from the population, the interval contains the true parameter value with a probability equal to the confidence level.
- A credible interval, i.e. a set of values that contains, say, 95% of the existing (posterior) belief.
- The rejection of the formulated hypothesis.
- Grouping or clustering the data points.
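The repeated-sampling meaning of an interval estimate can be illustrated with a small simulation. The population mean and standard deviation below are made-up values; the point is that roughly 95% of intervals constructed this way should contain the true parameter:

```python
import math
import random
import statistics

random.seed(1)
TRUE_MEAN = 50.0  # hypothetical population mean
covered = 0
TRIALS = 1000

for _ in range(TRIALS):
    # Draw a fresh sample and build a 95% interval around its mean.
    sample = [random.gauss(TRUE_MEAN, 10) for _ in range(40)]
    m = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    lo, hi = m - 1.96 * sem, m + 1.96 * sem
    if lo <= TRUE_MEAN <= hi:
        covered += 1

# The fraction of intervals that captured the true mean.
print(f"coverage: {covered / TRIALS:.2f}")
```

The printed coverage should be close to the nominal 95%, which is exactly what the confidence level promises over repeated samples.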
Inferential Statistics – Definition
This is the branch of statistics that focuses on drawing inferences or conclusions about a population by observing and analysing a sample. Inferential statistics is divided into two main areas:
- Estimating parameters: analysis of the sample data is used to estimate a population parameter.
- Hypothesis testing: the sampled data are used to answer a research question.
It is good to know that inferential statistics applies only where data collected and analysed from a sample are used to draw conclusions about a bigger population. Before you get deep into inferential statistics, it is good to understand the terms used in descriptions, which include:
Population: the entire group of people within a particular region on which you are to carry out an investigation.
Sample: the subset of the population that you will have a chance to interview and research through direct interaction.
Sample size: the number of people you choose to represent the rest of the population. It is good to take a sufficiently large sample so as to get better results. If you take a very small sample, you are likely to fail in reaching the right judgement because the estimate is imprecise. For example:
You might have a new drug whose effectiveness in treating a certain disease you need to check. To test the drug, you find people with the disease, administer the drug, and measure the time they take to heal. With only a few people, you are likely to get unreliable results; increasing the number of people treated with the drug improves reliability. Hence, sample size is very important in inferential statistics.
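The effect of sample size on reliability can be sketched with simulated recovery times. The true mean of 10 days and the sample sizes below are hypothetical; the point is that estimates from small samples spread far more widely than those from large ones:

```python
import random
import statistics

# Hypothetical trial: true mean recovery time is 10 days, stdev 3.
random.seed(2)

def mean_estimate(n):
    """Average recovery time from one simulated sample of n patients."""
    return statistics.mean(random.gauss(10, 3) for _ in range(n))

# Spread of the estimate across 500 repeated studies, small vs large samples.
small = [mean_estimate(5) for _ in range(500)]
large = [mean_estimate(100) for _ in range(500)]

print(f"stdev of estimates, n=5:   {statistics.stdev(small):.2f}")
print(f"stdev of estimates, n=100: {statistics.stdev(large):.2f}")
```

The n=5 estimates scatter several times more widely around the true value than the n=100 estimates, which is why a small sample so often leads to the wrong judgement.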
You can conduct sampling in a particular region and, depending on the trend obtained from it, go ahead and make assumptions about the other regions, as they exhibit the same traits. Some of the main tools used in inferential statistics include:
- Confidence intervals
- Central limit theorem
- Comparison of means
- Regression analysis
- t-distributions
- Normal distributions
- Binomial theorem
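As one illustration of the central limit theorem from the list above, the means of repeated samples drawn from a uniform distribution cluster around the population mean (0.5) with a roughly normal spread, even though the underlying distribution is not normal at all:

```python
import random
import statistics

# Central limit theorem sketch: means of samples of size 30 drawn from
# a uniform(0, 1) distribution.
random.seed(3)
means = [statistics.mean(random.random() for _ in range(30))
         for _ in range(2000)]

# Uniform(0,1) has mean 0.5 and stdev sqrt(1/12) ~ 0.289, so sample
# means of size 30 should have stdev about 0.289 / sqrt(30) ~ 0.053.
print(f"mean of sample means:  {statistics.mean(means):.3f}")
print(f"stdev of sample means: {statistics.stdev(means):.3f}")
```

The observed centre and spread match the theoretical values, which is what lets the normal-approximation formulas above be applied to non-normal data.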
The Null Hypothesis
The null hypothesis is a type of statistical hypothesis used to suggest that no statistical significance exists in a given set of observations. The null hypothesis asserts that no relationship exists between variables, or that a single variable does not differ from its calculated mean. The null hypothesis is presumed correct until statistical evidence is provided to reject it in favour of an alternative hypothesis.
The null hypothesis, or conjecture, presumes that any kind of significance or difference you note in a set of data is attributable to chance or occurs randomly. A common form of null hypothesis is the assertion that a given population mean is equal to a claimed value. For example, assume that the average time to travel to the next town is 40 minutes. The null hypothesis would then be stated as "the population mean is equal to 40 minutes."
Often the null hypothesis claims that there is no difference or association between a given set of variables. People often misunderstand "null" to imply "zero", but this is not always the case. For example, a null hypothesis may also state that:
- The correlation between poverty and depression is 0.5.
In the example above no zero is involved, and although it may be unusual, it is a valid null hypothesis too. The term derives from "nullify": the null hypothesis is a statement that can be refuted, regardless of whether it specifies a zero effect.
In order to test a null hypothesis, we need to know how it works. For example, suppose I want to know whether depression is related to poverty among a certain group of people in a country. One approach is to formulate a null hypothesis. Since the phrase "related to" is not precise, we choose as our null hypothesis a statement contrary to the research question:
- The correlation between depression and poverty is zero in a certain country.
We can then try to refute the above hypothesis in order to demonstrate that poverty and depression are related. We cannot possibly ask all the people in that country how depressed they generally are, so a sample is taken from the population and its members are asked about their poverty and their depression.
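A simple way to test this null hypothesis without distributional tables is a permutation test, sketched below. The poverty and depression scores are simulated, hypothetical data with a built-in association; shuffling one variable mimics the null hypothesis of zero correlation:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(4)
# Hypothetical survey of 50 people, with a positive association built in.
poverty = [random.gauss(0, 1) for _ in range(50)]
depression = [p * 0.6 + random.gauss(0, 1) for p in poverty]

observed = pearson_r(poverty, depression)

# Permutation test: shuffling one variable breaks any real association,
# simulating the null hypothesis. Count how often the shuffled
# correlation is at least as extreme as the observed one.
extreme = 0
TRIALS = 2000
for _ in range(TRIALS):
    shuffled = random.sample(depression, len(depression))
    if abs(pearson_r(poverty, shuffled)) >= abs(observed):
        extreme += 1

p_value = extreme / TRIALS
print(f"observed r = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value here means a correlation this large almost never arises by chance under the null hypothesis, which is the evidence needed to refute it.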
P-values and Examples
What Is a P-value?
The p-value is the level of marginal significance in a statistical hypothesis test: it represents the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. P-values are used as an alternative to rejection points, providing the smallest significance level at which the null hypothesis would be rejected.
Stronger evidence in favour of the alternative hypothesis corresponds to a smaller p-value: when the p-value obtained is less than the chosen significance level, the null hypothesis is rejected. P-values in statistical hypothesis testing are commonly applied in various fields of research such as biology, physics, economics, and finance.
P-value tables, spreadsheets, or statistical software are used to calculate p-values. For easy comparison of results, researchers report the p-values of their hypothesis tests, which allows the reader to easily interpret the statistical data. This is referred to as the p-value approach to hypothesis testing.
P-value Approach to Hypothesis Testing
The p-value approach to hypothesis testing uses the calculated probability to decide whether the null hypothesis can be rejected given the evidence. The null hypothesis is the existing or prevailing claim about a given set of statistical data. The alternative hypothesis, on the other hand, claims that the population statistic differs from the value stated in the null hypothesis.
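The travel-time example from the null hypothesis section can be worked through with a two-sided z-test. The sample below is simulated, and using the sample standard deviation together with the normal distribution is a simplifying assumption:

```python
import math
import random
import statistics

# Null hypothesis: the population mean travel time is 40 minutes.
# Simulated sample of 60 observed trips (hypothetical data).
random.seed(5)
sample = [random.gauss(45, 8) for _ in range(60)]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))
z = (mean - 40) / sem  # how many standard errors the sample mean is from 40

# Two-sided p-value from the standard normal CDF, via math.erf.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")

# Reject the null at the 5% significance level when p < 0.05.
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```

The significance level (0.05 here) is fixed before looking at the data; the p-value is then compared against it to make the decision.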
In application, the significance level is clearly specified before the test, to determine how small the p-value must be for the null hypothesis to be rejected. Since p-values are not exact but rather rely on statistical data obtained from a random sample of the population, the resulting decision may be wrong. In such a case the hypothesis test commits an error. The two types of errors are the Type I and Type II errors.
- A Type I error is where the null hypothesis is falsely rejected.
- A Type II error is where the null hypothesis is falsely accepted.
Type I error
A Type I error is the false rejection of the null hypothesis. The significance level used is equal to the probability of a Type I error occurring, i.e. the probability that the null hypothesis is rejected when it is true. If the null hypothesis is true, the probability of it being accepted is the significance level subtracted from 1.
Type II error
In this error, the null hypothesis is false but is nonetheless accepted.
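The Type I error rate can be checked by simulation. The sketch below repeatedly tests data that are actually generated under the null hypothesis (a true mean of 40, reusing the travel-time example), so every rejection is a Type I error; at a 5% significance level, roughly 5% of the tests should reject:

```python
import math
import random
import statistics

random.seed(6)

def z_test_p(sample, mu0):
    """Two-sided z-test p-value against the null mean mu0."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    z = (mean - mu0) / sem
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Every sample here truly has mean 40, so any rejection is a Type I error.
TRIALS = 2000
false_rejections = sum(
    z_test_p([random.gauss(40, 8) for _ in range(50)], 40) < 0.05
    for _ in range(TRIALS)
)
print(f"Type I error rate: {false_rejections / TRIALS:.3f}")
```

The observed rate sits near the 5% significance level, confirming that the significance level is precisely the probability of a Type I error.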
You can easily perfect your writing on inferential statistics by following the above guidelines and going through various samples written by other people. When you go through the examples, you get to understand the format of writing, and within no time you will be a pro.