"사후 확률"의 두 판 사이의 차이
Latest revision as of 01:21, 17 February 2021 (Wed)
Notes
Wikidata
- ID : Q278079
Corpus
- In statistics, the posterior probability expresses how likely a hypothesis is given a particular set of data.[1]
- Posterior probability is the probability an event will happen after all evidence or background information has been taken into account.[2]
- The posterior distribution is a way to summarize what we know about uncertain quantities in Bayesian analysis.[2]
- In other words, the posterior distribution summarizes what you know after the data has been observed.[2]
- Posterior probability is a conditional probability conditioned on randomly observed data.[3]
- In classification, posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also class membership probabilities.[3]
- While statistical classification methods by definition generate posterior probabilities, machine learning methods often supply only membership values, which do not induce any probabilistic confidence.[3]
- A posterior probability, in Bayesian statistics, is the revised or updated probability of an event occurring after taking into consideration new information.[4]
- The posterior probability is calculated by updating the prior probability using Bayes' theorem.[4]
- Posterior probability distributions should be a better reflection of the underlying truth of a data generating process than the prior probability since the posterior included more information.[4]
- A posterior probability can subsequently become a prior for a new updated posterior probability as new information arises and is incorporated into the analysis.[4] (A sketch of this updating cycle follows this list.)
- For most problems encountered in fisheries stock assessment, it is impossible to evaluate the posterior distribution (Equation 1.5) analytically.[5]
- Let \(q\) denote the parameter vector and \(p(q)\) the posterior probability of the parameter vector \(q\).[5]
- The aim is to draw a sample \(\{q_i : i = 1, 2, \ldots\}\) from the posterior distribution \(p(q)\), or to determine the relative posterior probabilities for a set of pre-specified parameter vectors.[5]
- The posterior probability for each combination of the two parameters is then calculated using Equation (1.6).[5] (A grid-evaluation sketch follows this list.)
- This can be done by computing the quantiles of the posterior distribution.[6]
- Essentially this boils down to summarizing the posterior distribution by a single number.[6]
- When \(q\) is a continuous-valued variable, as here, the most common Bayesian point estimate is the mean (or expectation) of the posterior distribution, which is called the “posterior mean”.[6] (A posterior-summary sketch follows this list.)
- The posterior probability is one of the quantities involved in Bayes' rule.[7]
- In sum, contrary to the common account, naive respondents do not perform well on tasks devised to improve their understanding of posterior probability.[8]
- Two notes are in order about the tasks that have documented the existence of an early understanding of prior and posterior probability (e.g., Task B and B').[8]
- …of all joint probabilities, the posterior probability is arrived at.[9]
- Posterior probability is the likelihood that the individual, whose genotype is uncertain, either carries the mutant gene or does not.[9]
- The range of the posterior probability of an interval over the ε-contamination class \(\Gamma = \{\pi = (1-\varepsilon)\pi_0 + \varepsilon q : q \in Q\}\) is derived.[10]
- We show that the sup (resp. inf) of the posterior probability of an interval is attained by a prior which is equal to \((1-\varepsilon)\pi_0\) except in one interval (resp. …).[10]
- In Scott (2002) and Congdon (2006), a new method is advanced to compute posterior probabilities of models under consideration.[11]
- While it is indeed possible to approximate posterior probabilities based solely on MCMC outputs from single models, as demonstrated by Gelfand and Dey (1994) and Bartolucci et al.[11] (A minimal Metropolis sketch follows this list.)
- Bayes' theorem can be used to estimate the posterior probabilities, that is, the probabilities that an email flagged as spam really is spam or is in fact legitimate.[12] (A toy spam-filter sketch follows this list.)
- A posterior probability is the probability of assigning observations to groups given the data.[13]
- If you know or can estimate these probabilities, a discriminant analysis can use these prior probabilities in calculating the posterior probabilities.[13] (A discriminant-analysis sketch follows this list.)
- P(Θ|data) on the left hand side is known as the posterior distribution.[14]
- We don’t care about the normalising constant so we have everything we need to calculate the unnormalised posterior distribution.[14]
- Now we have the posterior distribution for the length of a hydrogen bond we can derive statistics from it.[14]
- One of the most common statistics calculated from the posterior distribution is the mode.[14]
- This approximation to the likelihood function was used because a full characterization of the posterior probability function had not yet been performed.[15]
- In this case the reconstruction is chosen to maximize the posterior probability and task performance involves using the posterior probability of the various alternatives as the decision variable.[15]
- The results demonstrate the improvement in detection performance that can be achieved when the full posterior probability function is incorporated into the decision variable.[15]
- Given any report r submitted to the CPS server, MLA guarantees that the posterior probability of each query content in r is larger than 0.[16]
- Posterior probability is a revised probability that takes into account new available information.[17]
- A description of the network structure is given first, followed by an explanation of the posterior probability-updating algorithm.[18]
- Individuals were assigned to the group with the largest posterior probability estimate.[18]
- Prior to observing x, this distribution is the prior probability p(h); after observing x, it is the posterior probability p(h|x).[18]
- At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data.[18]
- There are many ways to summarize the posterior distribution of tree topologies and branch lengths.[19]
- The 50% majority consensus tree is a tree constructed so that it contains all of the clades that occur in at least 50% of the trees in the posterior distribution.[19]
- In other words it contains only the clades that have a posterior probability of >= 50%.[19]
- It has sometimes been used to describe the tree associated with the sampled state in the MCMC chain that has the highest posterior probability density.[19]
- We conclude that the posterior probability of \(H_0\) provides a much more conservative quantification of the mode detection than the significance level.[20]
- In this contribution we show that the algorithm can also be used to estimate the posterior probability, or the confidence of its decision on each test instance.[21]
- Uniform prior probabilities allow a frequentist posterior probability distribution of a study result’s replication to be calculated conditional solely on the study’s observations.[22]
- Attempts have been made to calculate posterior probabilities by avoiding an explicitly Bayesian approach.[22]
- This will provide the posterior probability of each possible true result from Θ 1 to Q 101 .[22]
- The curve markers represent actual likelihood and posterior probabilities.[22]
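The Investopedia excerpts above describe the posterior as an updated prior, with each posterior serving as the prior for the next round of data. A minimal sketch of that updating cycle in Python, using a conjugate Beta-Binomial coin-flip model; the Beta(1, 1) prior and the batches of flips are invented for illustration:

```python
# Sequential Bayesian updating with a conjugate Beta-Binomial model.
# With a Beta(a, b) prior on a coin's bias and a binomial likelihood,
# observing `heads` successes in `n` flips gives the posterior
# Beta(a + heads, b + n - heads).

def update(a, b, heads, n):
    """Return the Beta posterior parameters after n more flips."""
    return a + heads, b + (n - heads)

a, b = 1.0, 1.0                          # uniform Beta(1, 1) prior
for heads, n in [(7, 10), (4, 10), (6, 10)]:
    a, b = update(a, b, heads, n)        # the posterior becomes the new prior
    print(f"Beta({a:.0f}, {b:.0f}), posterior mean = {a / (a + b):.3f}")
```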
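The fisheries excerpts above note that the posterior usually cannot be evaluated analytically and instead compute it for each combination of two parameters. Without access to the cited Equation (1.6), here is a generic grid-evaluation sketch for a hypothetical normal model with a flat prior; the data and grid ranges are assumptions:

```python
import numpy as np

# Hypothetical observations from a normal model with unknown mean mu
# and standard deviation sigma; target: p(mu, sigma | data) on a grid.
data = np.array([4.9, 5.3, 4.7, 5.1, 5.4])

mus = np.linspace(4.0, 6.0, 200)
sigmas = np.linspace(0.1, 1.5, 200)
M, S = np.meshgrid(mus, sigmas, indexing="ij")

# Log-likelihood at every (mu, sigma) grid point; a flat prior adds nothing.
loglik = -data.size * np.log(S) - ((data[:, None, None] - M) ** 2).sum(0) / (2 * S**2)

post = np.exp(loglik - loglik.max())    # unnormalised posterior, overflow-safe
post /= post.sum()                      # normalise so the grid sums to 1

i, j = np.unravel_index(post.argmax(), post.shape)
print(f"posterior mode near mu = {mus[i]:.2f}, sigma = {sigmas[j]:.2f}")
```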
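When a grid is infeasible, the same excerpts point to drawing a sample \(\{q_i\}\) from the posterior; Markov chain Monte Carlo needs the posterior only up to its normalising constant. A minimal random-walk Metropolis sketch; the standard-normal target, step size, and chain length are illustrative assumptions, not the method of any cited paper:

```python
import math
import random

def log_post(q):
    """Unnormalised log-posterior; a standard normal stands in for likelihood x prior."""
    return -0.5 * q * q

def metropolis(log_post, q0=0.0, steps=10_000, scale=1.0):
    """Random-walk Metropolis: returns draws whose distribution approaches the posterior."""
    q, lp = q0, log_post(q0)
    draws = []
    for _ in range(steps):
        prop = q + random.gauss(0.0, scale)                     # symmetric proposal
        lp_prop = log_post(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):  # accept w.p. min(1, ratio)
            q, lp = prop, lp_prop
        draws.append(q)
    return draws

draws = metropolis(log_post)[1_000:]    # discard burn-in
print(f"posterior mean ≈ {sum(draws) / len(draws):.3f}")  # should be near 0
```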
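Several excerpts above summarise the posterior by a single number (the posterior mean or mode) or by quantiles. For a Beta posterior these summaries are immediate; the Beta(8, 4) example below (e.g. a uniform prior after 7 heads in 10 flips) and the SciPy quantile call are illustrative assumptions:

```python
from scipy.stats import beta

a, b = 8, 4                         # assumed Beta posterior parameters
posterior = beta(a, b)

mean = a / (a + b)                  # posterior mean, the usual Bayesian point estimate
mode = (a - 1) / (a + b - 2)        # posterior mode (MAP); valid for a, b > 1
lo, hi = posterior.ppf([0.025, 0.975])  # quantiles give a 95% credible interval

print(f"mean {mean:.3f}, mode {mode:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```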
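The spam-filtering excerpt applies Bayes' theorem to decide whether a message flagged as spam really is spam. A toy numeric sketch; all the probabilities below are invented for illustration:

```python
def posterior_spam(p_spam, p_evidence_given_spam, p_evidence_given_ham):
    """P(spam | evidence) by Bayes' theorem with two exclusive classes."""
    joint_spam = p_evidence_given_spam * p_spam
    joint_ham = p_evidence_given_ham * (1.0 - p_spam)
    return joint_spam / (joint_spam + joint_ham)

# Assumed: 20% of mail is spam; the word "offer" appears in 60% of spam
# and in 5% of legitimate mail.
print(f"P(spam | 'offer') = {posterior_spam(0.20, 0.60, 0.05):.3f}")  # 0.750
```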
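The Minitab excerpts describe discriminant analysis combining prior group probabilities with the data to obtain posterior probabilities of group membership, then assigning each observation to the group with the largest posterior. A one-feature Gaussian sketch; the group means, shared standard deviation, and priors are assumed:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density, used here as the class-conditional likelihood."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

groups = {"A": (0.0, 1.0, 0.7), "B": (2.0, 1.0, 0.3)}    # group: (mean, sd, prior)

x = 1.2                                                  # new observation
joint = {g: normal_pdf(x, mu, sd) * prior for g, (mu, sd, prior) in groups.items()}
total = sum(joint.values())
posteriors = {g: p / total for g, p in joint.items()}    # P(group | x)

print(posteriors, "->", max(posteriors, key=posteriors.get))
```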
Sources
[1] Posterior Probability
[2] Posterior Probability & the Posterior Distribution
[3] Posterior probability
[4] Posterior Probability Definition
[5] 2. METHODS FOR COMPUTING POSTERIOR DISTRIBUTIONS
[6] Summarizing and Interpreting the Posterior (analytic)
[7] Posterior probability
[8] Basic understanding of posterior probability
[9] Posterior probability | genetics
[10] Range of the posterior probability of an interval for priors with unimodality preserving contaminations
[11] Robert, Marin: On some difficulties with a posterior probability approximation technique
[12] Posterior Probability and Bayes
[13] What are posterior probabilities and prior probabilities?
[14] Probability concepts explained: Bayesian inference for parameter estimation.
[15] Task performance based on the posterior probability of maximum-entropy reconstructions obtained with MEMSYS 3
[16] posterior probability
[17] Posterior Probability
[18] posterior probability
[19] BEAST 2
[20] On posterior probability and significance level: application to the power spectrum of HD 49 933 observed by CoRoT
[21] Ensemble Confidence Estimates Posterior Probability
[22] Replacing P-values with frequentist posterior probabilities of replication—When possible parameter values must have uniform marginal prior probabilities
Metadata
Wikidata
- ID : Q278079
Spacy pattern list
- [{'LOWER': 'posterior'}, {'LEMMA': 'probability'}]
- [{'LOWER': 'posterior'}, {'LOWER': 'probability'}, {'LEMMA': 'distribution'}]
- [{'LOWER': 'posterior'}, {'LEMMA': 'distribution'}]
- [{'LOWER': 'posterior'}, {'LOWER': 'probability'}, {'LOWER': 'density'}, {'LEMMA': 'function'}]
- [{'LOWER': 'a'}, {'LOWER': 'posteriori'}, {'LEMMA': 'distribution'}]
- [{'LOWER': 'relative'}, {'LOWER': 'frequency'}, {'LEMMA': 'probability'}]
- [{'LOWER': 'a'}, {'LOWER': 'posterior'}, {'LEMMA': 'probability'}]
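These patterns are token-level matcher rules; a minimal sketch of loading a couple of them into spaCy's Matcher (spaCy v3 API; the en_core_web_sm model and the sample sentence are assumptions):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")   # assumes this model is installed
matcher = Matcher(nlp.vocab)
matcher.add("POSTERIOR", [
    [{'LOWER': 'posterior'}, {'LEMMA': 'probability'}],
    [{'LOWER': 'posterior'}, {'LEMMA': 'distribution'}],
])

doc = nlp("The posterior probabilities follow from the posterior distribution.")
for _, start, end in matcher(doc):
    print(doc[start:end].text)       # matched spans
```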