<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="ko">
	<id>https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%EA%B3%BC%EC%A0%81%ED%95%A9</id>
	<title>과적합 - 편집 역사</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%EA%B3%BC%EC%A0%81%ED%95%A9"/>
	<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;action=history"/>
	<updated>2026-04-05T02:46:23Z</updated>
	<subtitle>이 문서의 편집 역사</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=52616&amp;oldid=prev</id>
		<title>Pythagoras0: /* 위키데이터 */</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=52616&amp;oldid=prev"/>
		<updated>2021-02-22T07:44:13Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;위키데이터&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2021년 2월 22일 (월) 07:44 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;1번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;1번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 노트 ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 노트 ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===위키데이터===&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* ID :  [https://www.wikidata.org/wiki/Q331309 Q331309]&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===말뭉치===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===말뭉치===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# To lessen the chance of, or amount of, overfitting, several techniques are available (e.g. model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout).&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Overfitting Overfitting]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# To lessen the chance of, or amount of, overfitting, several techniques are available (e.g. model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout).&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Overfitting Overfitting]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=51293&amp;oldid=prev</id>
		<title>2021년 2월 17일 (수) 08:12에 Pythagoras0님의 편집</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=51293&amp;oldid=prev"/>
		<updated>2021-02-17T08:12:54Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2021년 2월 17일 (수) 08:12 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l110&quot; &gt;110번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;110번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 메타데이터 ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==메타데이터==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q331309 Q331309]&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q331309 Q331309]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Spacy 패턴 목록===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LEMMA&amp;#039;: &amp;#039;overfitting&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=47090&amp;oldid=prev</id>
		<title>Pythagoras0: /* 메타데이터 */ 새 문단</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=47090&amp;oldid=prev"/>
		<updated>2020-12-26T12:20:09Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;메타데이터: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2020년 12월 26일 (토) 12:20 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l109&quot; &gt;109번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;109번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== 메타데이터 ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===위키데이터===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* ID :  [https://www.wikidata.org/wiki/Q331309 Q331309]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=46230&amp;oldid=prev</id>
		<title>Pythagoras0: /* 노트 */ 새 문단</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EA%B3%BC%EC%A0%81%ED%95%A9&amp;diff=46230&amp;oldid=prev"/>
		<updated>2020-12-21T09:57:56Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;노트: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;새 문서&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== 노트 ==&lt;br /&gt;
&lt;br /&gt;
===위키데이터===&lt;br /&gt;
* ID :  [https://www.wikidata.org/wiki/Q331309 Q331309]&lt;br /&gt;
===말뭉치===&lt;br /&gt;
# To lessen the chance of, or amount of, overfitting, several techniques are available (e.g. model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout).&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Overfitting Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from.&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting/overtraining in supervised learning (e.g., neural network ).&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot; /&amp;gt;&lt;br /&gt;
# If the validation error increases(positive slope) while the training error steadily decreases(negative slope) then a situation of overfitting may have occurred.&amp;lt;ref name=&amp;quot;ref_fc8b4642&amp;quot; /&amp;gt;&lt;br /&gt;
# In fact, overfitting occurs in the real world all the time.&amp;lt;ref name=&amp;quot;ref_69cf4749&amp;quot;&amp;gt;[https://elitedatascience.com/overfitting-in-machine-learning Overfitting in Machine Learning: What It Is and How to Prevent It]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Detecting overfitting is useful, but it doesn’t solve the problem.&amp;lt;ref name=&amp;quot;ref_69cf4749&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is a modeling error that occurs when a function is too closely fit to a limited set of data points.&amp;lt;ref name=&amp;quot;ref_7b08f386&amp;quot;&amp;gt;[https://www.investopedia.com/terms/o/overfitting.asp Overfitting Definition]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# However, when applied to data outside of the sample, such theorems may likely prove to be merely the overfitting of a model to what were in reality just chance occurrences.&amp;lt;ref name=&amp;quot;ref_7b08f386&amp;quot; /&amp;gt;&lt;br /&gt;
# As you&amp;#039;ll see later on, overfitting is caused by making a model more complex than necessary.&amp;lt;ref name=&amp;quot;ref_32cd43eb&amp;quot;&amp;gt;[https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting Generalization: Peril of Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting happens when a machine learning model has become too attuned to the data on which it was trained and therefore loses its applicability to any other dataset.&amp;lt;ref name=&amp;quot;ref_f45dd5ae&amp;quot;&amp;gt;[https://www.datarobot.com/wiki/overfitting/ DataRobot Artificial Intelligence Wiki]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting causes the model to misrepresent the data from which it learned.&amp;lt;ref name=&amp;quot;ref_f45dd5ae&amp;quot; /&amp;gt;&lt;br /&gt;
# Picture2 — Regression Example for Overfitting and Underfitting, first Image represents model is Underfit.&amp;lt;ref name=&amp;quot;ref_13d4b4b2&amp;quot;&amp;gt;[https://medium.com/@itbodhi/overfitting-and-underfitting-in-machine-learning-models-76cb60dbdaf6 Overfitting and Underfitting. In Machine Leaning, model performance…]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The opposite of overfitting is underfitting.&amp;lt;ref name=&amp;quot;ref_14481710&amp;quot;&amp;gt;[https://www.tensorflow.org/tutorials/keras/overfit_and_underfit Overfit and underfit]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# To prevent overfitting, the best solution is to use more complete training data.&amp;lt;ref name=&amp;quot;ref_14481710&amp;quot; /&amp;gt;&lt;br /&gt;
# As an exercise, you can create an even larger model, and see how quickly it begins overfitting.&amp;lt;ref name=&amp;quot;ref_14481710&amp;quot; /&amp;gt;&lt;br /&gt;
# In this example, typically, only the &amp;quot;Tiny&amp;quot; model manages to avoid overfitting altogether, and each of the larger models overfit the data more quickly.&amp;lt;ref name=&amp;quot;ref_14481710&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data.&amp;lt;ref name=&amp;quot;ref_ce551f24&amp;quot;&amp;gt;[https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/ Overfitting and Underfitting With Machine Learning Algorithms]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting is more likely with nonparametric and nonlinear models that have more flexibility when learning a target function.&amp;lt;ref name=&amp;quot;ref_ce551f24&amp;quot; /&amp;gt;&lt;br /&gt;
# For example, decision trees are a nonparametric machine learning algorithm that is very flexible and is subject to overfitting training data.&amp;lt;ref name=&amp;quot;ref_ce551f24&amp;quot; /&amp;gt;&lt;br /&gt;
# If we train for too long, the performance on the training dataset may continue to decrease because the model is overfitting and learning the irrelevant detail and noise in the training dataset.&amp;lt;ref name=&amp;quot;ref_ce551f24&amp;quot; /&amp;gt;&lt;br /&gt;
# The more we leave the model training the higher the chance of overfitting occurring.&amp;lt;ref name=&amp;quot;ref_0815c0db&amp;quot;&amp;gt;[https://towardsdatascience.com/what-are-overfitting-and-underfitting-in-machine-learning-a96b30864690 What Are Overfitting and Underfitting in Machine Learning?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting (or high variance) leads to more bad than good.&amp;lt;ref name=&amp;quot;ref_0815c0db&amp;quot; /&amp;gt;&lt;br /&gt;
# As you probably expected, underfitting (i.e. high bias) is just as bad for generalization of the model as overfitting.&amp;lt;ref name=&amp;quot;ref_0815c0db&amp;quot; /&amp;gt;&lt;br /&gt;
# Depending on the model at hand, a performance that lies between overfitting and underfitting is more desirable.&amp;lt;ref name=&amp;quot;ref_0815c0db&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting occurs when you achieve a good fit of your model on the training data, while it does not generalize well on new, unseen data.&amp;lt;ref name=&amp;quot;ref_725a978c&amp;quot;&amp;gt;[https://towardsdatascience.com/handling-overfitting-in-deep-learning-models-c760ee047c6e Handling overfitting in deep learning models]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We can identify overfitting by looking at validation metrics, like loss or accuracy.&amp;lt;ref name=&amp;quot;ref_725a978c&amp;quot; /&amp;gt;&lt;br /&gt;
# There are several manners in which we can reduce overfitting in deep learning models.&amp;lt;ref name=&amp;quot;ref_725a978c&amp;quot; /&amp;gt;&lt;br /&gt;
# Another way to reduce overfitting is to lower the capacity of the model to memorize the training data.&amp;lt;ref name=&amp;quot;ref_725a978c&amp;quot; /&amp;gt;&lt;br /&gt;
# In general there is a trade-off between the size of the space of distinct models that a learner can produce and the risk of overfitting.&amp;lt;ref name=&amp;quot;ref_1312c191&amp;quot;&amp;gt;[https://link.springer.com/10.1007%2F978-0-387-30164-8_623 Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As the space of models between which the learner can select increases, the risk of overfitting will increase.&amp;lt;ref name=&amp;quot;ref_1312c191&amp;quot; /&amp;gt;&lt;br /&gt;
# This situation is achievable at a spot between overfitting and underfitting.&amp;lt;ref name=&amp;quot;ref_7623d28c&amp;quot;&amp;gt;[https://www.geeksforgeeks.org/underfitting-and-overfitting-in-machine-learning/ Underfitting and Overfitting in Machine Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# If it will learn for too long, the model will become more prone to overfitting due to the presence of noise and less useful details.&amp;lt;ref name=&amp;quot;ref_7623d28c&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is a term used in statistics that refers to a modeling error that occurs when a function corresponds too closely to a particular set of data.&amp;lt;ref name=&amp;quot;ref_2b62e1ed&amp;quot;&amp;gt;[https://corporatefinanceinstitute.com/resources/knowledge/other/overfitting/ Overview, Detection, and Prevention Methods]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting can be identified by checking validation metrics such as accuracy and loss.&amp;lt;ref name=&amp;quot;ref_2b62e1ed&amp;quot; /&amp;gt;&lt;br /&gt;
# The validation metrics usually increase until a point where they stagnate or start declining when the model is affected by overfitting.&amp;lt;ref name=&amp;quot;ref_2b62e1ed&amp;quot; /&amp;gt;&lt;br /&gt;
# Detecting overfitting is almost impossible before you test the data.&amp;lt;ref name=&amp;quot;ref_2b62e1ed&amp;quot; /&amp;gt;&lt;br /&gt;
# What Is Overfitting In A Machine Learning Project?&amp;lt;ref name=&amp;quot;ref_d5b36abd&amp;quot;&amp;gt;[https://medium.com/fintechexplained/the-problem-of-overfitting-and-how-to-resolve-it-1eb9456b1dfd The Problem Of Overfitting And How To Resolve It]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# How Can We Detect Overfitting?&amp;lt;ref name=&amp;quot;ref_d5b36abd&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is when your model has over-trained itself on the data that is fed to train it.&amp;lt;ref name=&amp;quot;ref_d5b36abd&amp;quot; /&amp;gt;&lt;br /&gt;
# These parameters are set to smaller values to eliminate overfitting.&amp;lt;ref name=&amp;quot;ref_d5b36abd&amp;quot; /&amp;gt;&lt;br /&gt;
# There is terminology to describe how well a machine learning model learns and generalizes to new data, this is overfitting and underfitting.&amp;lt;ref name=&amp;quot;ref_674055bd&amp;quot;&amp;gt;[https://datascience.foundation/sciencewhitepaper/underfitting-and-overfitting-in-machine-learning Underfitting and Overfitting in Machine Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Let’s understand what is Best Fit, Overfitting and Underfitting?&amp;lt;ref name=&amp;quot;ref_674055bd&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting refers to the scenario where a machine learning model can’t generalize or fit well on unseen dataset.&amp;lt;ref name=&amp;quot;ref_674055bd&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is a term used in statistics that refers to a modeling error that occurs when a function corresponds too closely to a dataset.&amp;lt;ref name=&amp;quot;ref_674055bd&amp;quot; /&amp;gt;&lt;br /&gt;
# In this section we will look at some techniques for preventing our model becoming too powerful (overfitting).&amp;lt;ref name=&amp;quot;ref_5aef9701&amp;quot;&amp;gt;[https://cnl.salk.edu/~schraudo/teach/NNcourse/overfitting.html Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Deep learning methodology has revealed a surprising statistical phenomenon: overfitting can perform well.&amp;lt;ref name=&amp;quot;ref_5424a00c&amp;quot;&amp;gt;[https://www.pnas.org/content/117/48/30063 Benign overfitting in linear regression]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The following theorem shows that the kind of overparameterization that is essential for benign overfitting requires Σ to have a heavy tail.&amp;lt;ref name=&amp;quot;ref_5424a00c&amp;quot; /&amp;gt;&lt;br /&gt;
# The phenomenon of benign overfitting was first observed in deep neural networks.&amp;lt;ref name=&amp;quot;ref_5424a00c&amp;quot; /&amp;gt;&lt;br /&gt;
# However, the intuition from the linear setting suggests that truncating to a finite-dimensional space might be important for good statistical performance in the overfitting regime.&amp;lt;ref name=&amp;quot;ref_5424a00c&amp;quot; /&amp;gt;&lt;br /&gt;
# Your model is overfitting your training data when you see that the model performs well on the training data but does not perform well on the evaluation data.&amp;lt;ref name=&amp;quot;ref_219c9bad&amp;quot;&amp;gt;[https://docs.aws.amazon.com/machine-learning/latest/dg/model-fit-underfitting-vs-overfitting.html Model Fit: Underfitting vs. Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# If your model is overfitting the training data, it makes sense to take actions that reduce model flexibility.&amp;lt;ref name=&amp;quot;ref_219c9bad&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting occurs when a model tries to predict a trend in data that is too noisy.&amp;lt;ref name=&amp;quot;ref_9ec4c26d&amp;quot;&amp;gt;[https://www.kdnuggets.com/2019/12/5-techniques-prevent-overfitting-neural-networks.html 5 Techniques to Prevent Overfitting in Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The first step when dealing with overfitting is to decrease the complexity of the model.&amp;lt;ref name=&amp;quot;ref_9ec4c26d&amp;quot; /&amp;gt;&lt;br /&gt;
# This helps in increasing the dataset size and thus reduce overfitting.&amp;lt;ref name=&amp;quot;ref_9ec4c26d&amp;quot; /&amp;gt;&lt;br /&gt;
# So which technique is better at avoiding overfitting?&amp;lt;ref name=&amp;quot;ref_9ec4c26d&amp;quot; /&amp;gt;&lt;br /&gt;
# Example 7.15 showed how complex models can lead to overfitting the data.&amp;lt;ref name=&amp;quot;ref_19def9d9&amp;quot;&amp;gt;[https://artint.info/2e/html/ArtInt2e.Ch7.S4.html 7.4 Overfitting‣ Chapter 7 Supervised Machine Learning ‣ Artificial Intelligence: Foundations of Computational Agents, 2nd Edition]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Overfitting results in overconfidence, where the learner is more confident in its prediction than the data warrants.&amp;lt;ref name=&amp;quot;ref_19def9d9&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting a model is a condition where a statistical model begins to describe the random error in the data rather than the relationships between variables.&amp;lt;ref name=&amp;quot;ref_90812847&amp;quot;&amp;gt;[https://statisticsbyjim.com/regression/overfitting-regression-models/ Overfitting Regression Models: Problems, Detection, and Avoidance]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In regression analysis, overfitting can produce misleading R-squared values, regression coefficients, and p-values.&amp;lt;ref name=&amp;quot;ref_90812847&amp;quot; /&amp;gt;&lt;br /&gt;
# I’d really like these problems to sink in because overfitting often occurs when analysts chase a high R-squared.&amp;lt;ref name=&amp;quot;ref_90812847&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting a regression model is similar to the example above.&amp;lt;ref name=&amp;quot;ref_90812847&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting in Machine Learning is one such deficiency in Machine Learning that hinders the accuracy as well as the performance of the model.&amp;lt;ref name=&amp;quot;ref_c134bca9&amp;quot;&amp;gt;[https://www.edureka.co/blog/overfitting-in-machine-learning/ What Is Overfitting In Machine Learning? - ML Algorithms]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This is what overfitting looks like.&amp;lt;ref name=&amp;quot;ref_c134bca9&amp;quot; /&amp;gt;&lt;br /&gt;
# In order to avoid overfitting, we could stop the training at an earlier stage.&amp;lt;ref name=&amp;quot;ref_c134bca9&amp;quot; /&amp;gt;&lt;br /&gt;
# The main challenge with overfitting is to estimate the accuracy of the performance of our model with new data.&amp;lt;ref name=&amp;quot;ref_c134bca9&amp;quot; /&amp;gt;&lt;br /&gt;
# This example demonstrates the problems of underfitting and overfitting and how we can use linear regression with polynomial features to approximate nonlinear functions.&amp;lt;ref name=&amp;quot;ref_abe148bc&amp;quot;&amp;gt;[http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html Underfitting vs. Overfitting — scikit-learn 0.23.2 documentation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We quantitatively evaluate overfitting / underfitting by using cross-validation.&amp;lt;ref name=&amp;quot;ref_abe148bc&amp;quot; /&amp;gt;&lt;br /&gt;
# Let’s start with the most common and complex problem: overfitting.&amp;lt;ref name=&amp;quot;ref_c6dcacce&amp;quot;&amp;gt;[https://allcloud.io/blog/how-to-solve-underfitting-and-overfitting-data-models/ How to Solve Underfitting and Overfitting Data Models]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Your model is overfitting when it fails to generalize to new data.&amp;lt;ref name=&amp;quot;ref_c6dcacce&amp;quot; /&amp;gt;&lt;br /&gt;
# It is important to understand that overfitting is a complex problem.&amp;lt;ref name=&amp;quot;ref_c6dcacce&amp;quot; /&amp;gt;&lt;br /&gt;
# The algorithms you use include by default regularization parameters meant to prevent overfitting.&amp;lt;ref name=&amp;quot;ref_c6dcacce&amp;quot; /&amp;gt;&lt;br /&gt;
# What is overfitting in trading?&amp;lt;ref name=&amp;quot;ref_ce5d3dd3&amp;quot;&amp;gt;[https://algotrading101.com/learn/what-is-overfitting-in-trading/ What is Overfitting in Trading?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Another way to reduce overfitting is by running out-of-sample optimisations.&amp;lt;ref name=&amp;quot;ref_ce5d3dd3&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is a problem in machine learning that introduces errors based on noise and meaningless data into prediction or classification.&amp;lt;ref name=&amp;quot;ref_a7344af0&amp;quot;&amp;gt;[https://radiopaedia.org/articles/overfitting Radiology Reference Article]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Strictly speaking, overfitting applies to fitting a polynomial curve to data points where the polynomial suggests a more complex model than the accurate one.&amp;lt;ref name=&amp;quot;ref_a7344af0&amp;quot; /&amp;gt;&lt;br /&gt;
# There are many techniques to correct for overfitting including regularization.&amp;lt;ref name=&amp;quot;ref_a7344af0&amp;quot; /&amp;gt;&lt;br /&gt;
# Can you explain what is underfitting and overfitting in the context of machine learning?&amp;lt;ref name=&amp;quot;ref_42aea1e2&amp;quot;&amp;gt;[https://www.analyticsvidhya.com/blog/2020/02/underfitting-overfitting-best-fitting-machine-learning/ Overfitting And Underfitting in Machine Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Here’s my personal experience – ask any seasoned data scientist about this, they typically start talking about some array of fancy terms like Overfitting, Underfitting, Bias, and Variance.&amp;lt;ref name=&amp;quot;ref_42aea1e2&amp;quot; /&amp;gt;&lt;br /&gt;
# For example, non-parametric models like decision trees, KNN, and other tree-based algorithms are very prone to overfitting.&amp;lt;ref name=&amp;quot;ref_42aea1e2&amp;quot; /&amp;gt;&lt;br /&gt;
# These models can learn very complex relations which can result in overfitting.&amp;lt;ref name=&amp;quot;ref_42aea1e2&amp;quot; /&amp;gt;&lt;br /&gt;
# The fits shown exemplify underfitting (gray diagonal line, linear fit), reasonable fitting (black curve, third-order polynomial) and overfitting (dashed curve, fifth-order polynomial).&amp;lt;ref name=&amp;quot;ref_8e427fa2&amp;quot;&amp;gt;[https://www.nature.com/articles/nmeth.3968 Model selection and overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# To illustrate how to choose a model and avoid under- and overfitting, let us return to last month&amp;#039;s diagnostic test to predict a patient&amp;#039;s disease status.&amp;lt;ref name=&amp;quot;ref_8e427fa2&amp;quot; /&amp;gt;&lt;br /&gt;
# This trend is misleading—we were merely fitting to noise and overfitting the training set.&amp;lt;ref name=&amp;quot;ref_8e427fa2&amp;quot; /&amp;gt;&lt;br /&gt;
# The effects of overfitting become noticeable (Fig. 2b).&amp;lt;ref name=&amp;quot;ref_8e427fa2&amp;quot; /&amp;gt;&lt;br /&gt;
# When you train a neural network, you have to avoid overfitting.&amp;lt;ref name=&amp;quot;ref_7f437d22&amp;quot;&amp;gt;[https://www.unite.ai/what-is-overfitting/ What is Overfitting?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# That’s a quick definition of overfitting, but let’s go over the concept of overfitting in more detail.&amp;lt;ref name=&amp;quot;ref_7f437d22&amp;quot; /&amp;gt;&lt;br /&gt;
# Before we delve too deeply into overfitting, it might be helpful to take a look at the concept of underfitting and “fit” generally.&amp;lt;ref name=&amp;quot;ref_7f437d22&amp;quot; /&amp;gt;&lt;br /&gt;
# Creating a model that has learned the patterns of the training data too well is what causes overfitting.&amp;lt;ref name=&amp;quot;ref_7f437d22&amp;quot; /&amp;gt;&lt;br /&gt;
# Since computation is (relatively) cheap, and overfitting is much easier to detect, it is more straightforward to build a high-capacity model and use known techniques to prevent overfitting.&amp;lt;ref name=&amp;quot;ref_817251d4&amp;quot;&amp;gt;[https://www.cs.toronto.edu/~lczhang/360/lec/w05/overfit.html overfit]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# These are only some of the techniques for preventing overfitting.&amp;lt;ref name=&amp;quot;ref_817251d4&amp;quot; /&amp;gt;&lt;br /&gt;
# Since we are studying overfitting, I will artificially reduce the number of training examples to 200.&amp;lt;ref name=&amp;quot;ref_817251d4&amp;quot; /&amp;gt;&lt;br /&gt;
# Focusing on Applicability Domain and Overfitting by Variable Selection.&amp;lt;ref name=&amp;quot;ref_af9b86b9&amp;quot;&amp;gt;[https://pubs.acs.org/doi/10.1021/ci0342472 The Problem of Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time.&amp;lt;ref name=&amp;quot;ref_a8ae4cf5&amp;quot;&amp;gt;[https://jmlr.org/papers/v15/srivastava14a.html Dropout: A Simple Way to Prevent Neural Networks from Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This significantly reduces overfitting and gives major improvements over other regularization methods.&amp;lt;ref name=&amp;quot;ref_a8ae4cf5&amp;quot; /&amp;gt;&lt;br /&gt;
# Ensembling many diverse models can help mitigate overfitting in some cases.&amp;lt;ref name=&amp;quot;ref_8580818b&amp;quot;&amp;gt;[https://datasciencebowl.com/overfitting/ The Data Science Bowl]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# So, overfitting in this case is not a bad idea when the number of test set rows (observations) is very large (in the billions) and the number of columns (features) is less than the number of rows.&amp;lt;ref name=&amp;quot;ref_8580818b&amp;quot; /&amp;gt;&lt;br /&gt;
# The best way to avoid overfitting in data science is to only make a single Kaggle entry based upon local CV.&amp;lt;ref name=&amp;quot;ref_8580818b&amp;quot; /&amp;gt;&lt;br /&gt;
# This work exposes the overfitting that emerges in such optimization.&amp;lt;ref name=&amp;quot;ref_5b09f82d&amp;quot;&amp;gt;[https://www.sciencedirect.com/science/article/pii/S1568494617302806 A study of overfitting in optimization of a manufacturing quality control procedure]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Results on two distinct quality control problems show that optimization amplifies overfitting, i.e., the single cross-validation error estimate for the optimized models is overly optimistic.&amp;lt;ref name=&amp;quot;ref_5b09f82d&amp;quot; /&amp;gt;&lt;br /&gt;
# To prevent overfitting, the best solution is to use more training data.&amp;lt;ref name=&amp;quot;ref_fa8c1c61&amp;quot;&amp;gt;[https://tensorflow.rstudio.com/tutorials/beginners/basic-ml/tutorial_overfit_underfit/ Tutorial: Overfitting and Underfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Before we go on to talk about some more simple classifier methods, we need to talk about overfitting.&amp;lt;ref name=&amp;quot;ref_e4362328&amp;quot;&amp;gt;[https://www.futurelearn.com/courses/data-mining-with-weka/0/steps/25389 Overfitting]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# That’s a good example of overfitting.&amp;lt;ref name=&amp;quot;ref_e4362328&amp;quot; /&amp;gt;&lt;br /&gt;
# Overfitting is a general phenomenon that plagues all machine learning methods.&amp;lt;ref name=&amp;quot;ref_e4362328&amp;quot; /&amp;gt;&lt;br /&gt;
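The pattern the quotes above keep returning to (training error keeps falling as model complexity grows, while held-out error eventually rises) can be demonstrated in a few lines. This is a minimal numpy sketch, not taken from any of the cited sources; the sine target, noise level, and simple holdout split (standing in for the cross-validation several quotes mention) are illustrative assumptions.&lt;br /&gt;

```python
# Illustrative sketch: nested polynomial fits on noisy data.
# Training MSE can only decrease with degree; held-out MSE need not.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, 60)  # noisy nonlinear target

# simple holdout split: first 40 points train, last 20 validate
x_tr, y_tr = x[:40], y[:40]
x_va, y_va = x[40:], y[40:]

def mse(deg):
    coef = np.polyfit(x_tr, y_tr, deg)  # least-squares fit on training data only
    err_tr = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coef, x_va) - y_va) ** 2)
    return err_tr, err_va

for deg in (1, 3, 9):  # underfit, reasonable, flexible
    err_tr, err_va = mse(deg)
    print(deg, round(err_tr, 3), round(err_va, 3))
```

Comparing the two error columns per row is the diagnostic: a large gap between a small training error and a larger validation error is the overfitting signature these quotes describe.&lt;br /&gt;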
===Sources===&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
</feed>