<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95</id>
	<title>오차역전파법 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95"/>
	<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;action=history"/>
	<updated>2026-04-04T15:41:36Z</updated>
	<subtitle>Revision history for this page</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=51036&amp;oldid=prev</id>
		<title>Edit by Pythagoras0 at 07:41, 17 February 2021 (Wed)</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=51036&amp;oldid=prev"/>
		<updated>2021-02-17T07:41:45Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 07:41, 17 February 2021 (Wed)&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l82&quot; &gt;Line 82:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 82:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Metadata ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Metadata==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Wikidata===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Wikidata===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q798503 Q798503]&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q798503 Q798503]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Spacy pattern list===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LEMMA&amp;#039;: &amp;#039;backpropagation&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LOWER&amp;#039;: &amp;#039;backward&amp;#039;}, {&amp;#039;LOWER&amp;#039;: &amp;#039;propagation&amp;#039;}, {&amp;#039;LOWER&amp;#039;: &amp;#039;of&amp;#039;}, {&amp;#039;LEMMA&amp;#039;: &amp;#039;error&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LEMMA&amp;#039;: &amp;#039;backprop&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LEMMA&amp;#039;: &amp;#039;BP&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;
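&lt;p&gt;The rows added above are spaCy token patterns for spotting mentions of this topic in text. Below is a minimal sketch of how such patterns might be registered with spacy.matcher.Matcher; the pipeline name en_core_web_sm, the match label, and the sample sentence are illustrative assumptions, not part of this edit.&lt;/p&gt;
&lt;pre&gt;
import spacy
from spacy.matcher import Matcher

# Hypothetical pipeline; any English pipeline with a lemmatizer would do.
nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)

# The four token patterns from this edit, registered under one label.
matcher.add('BACKPROPAGATION', [
    [{'LEMMA': 'backpropagation'}],
    [{'LOWER': 'backward'}, {'LOWER': 'propagation'}, {'LOWER': 'of'}, {'LEMMA': 'error'}],
    [{'LEMMA': 'backprop'}],
    [{'LEMMA': 'BP'}],
])

# Print every matching span in a sample sentence.
doc = nlp('Backprop, the backward propagation of errors, is abbreviated BP.')
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
&lt;/pre&gt;</summary>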
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=46831&amp;oldid=prev</id>
		<title>Pythagoras0: /* Metadata */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=46831&amp;oldid=prev"/>
		<updated>2020-12-26T12:01:18Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;메타데이터: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 12:01, 26 December 2020 (Sat)&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l81&quot; &gt;Line 81:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 81:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Sources===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Sources===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== Metadata ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Wikidata===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* ID :  [https://www.wikidata.org/wiki/Q798503 Q798503]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=46627&amp;oldid=prev</id>
		<title>Pythagoras0: /* Notes */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%EC%98%A4%EC%B0%A8%EC%97%AD%EC%A0%84%ED%8C%8C%EB%B2%95&amp;diff=46627&amp;oldid=prev"/>
		<updated>2020-12-23T07:49:27Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;노트: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;새 문서&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== 노트 ==&lt;br /&gt;
&lt;br /&gt;
===Wikidata===&lt;br /&gt;
* ID :  [https://www.wikidata.org/wiki/Q798503 Q798503]&lt;br /&gt;
===Corpus===&lt;br /&gt;
# There is no shortage of papers online that attempt to explain how backpropagation works, but few that include an example with actual numbers.&amp;lt;ref name=&amp;quot;ref_12326178&amp;quot;&amp;gt;[https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/ A Step by Step Backpropagation Example]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn&amp;#039;t fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.&amp;lt;ref name=&amp;quot;ref_ebf3a9bb&amp;quot;&amp;gt;[http://neuralnetworksanddeeplearning.com/chap2.html Neural networks and deep learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# If you&amp;#039;re not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you&amp;#039;re willing to ignore.&amp;lt;ref name=&amp;quot;ref_ebf3a9bb&amp;quot; /&amp;gt;&lt;br /&gt;
# And so backpropagation isn&amp;#039;t just a fast algorithm for learning.&amp;lt;ref name=&amp;quot;ref_ebf3a9bb&amp;quot; /&amp;gt;&lt;br /&gt;
# I&amp;#039;ve written the rest of the book to be accessible even if you treat backpropagation as a black box.&amp;lt;ref name=&amp;quot;ref_ebf3a9bb&amp;quot; /&amp;gt;&lt;br /&gt;
# Many neural network books (Haykin, 1994; Bishop, 1995; Ripley, 1996) do not formulate backpropagation in vector-matrix terms.&amp;lt;ref name=&amp;quot;ref_2a0be238&amp;quot;&amp;gt;[https://www.sciencedirect.com/topics/computer-science/backpropagation-algorithm Backpropagation Algorithm - an overview]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Hinton and Salakhutdinov (2006) noted that it has been known since the 1980s that deep autoencoders, optimized through backpropagation, could be effective for nonlinear dimensionality reduction.&amp;lt;ref name=&amp;quot;ref_2a0be238&amp;quot; /&amp;gt;&lt;br /&gt;
# In this chapter we discuss a popular learning method capable of handling such large learning problems—the backpropagation algorithm.&amp;lt;ref name=&amp;quot;ref_2ebda9af&amp;quot;&amp;gt;[https://link.springer.com/chapter/10.1007/978-3-642-61068-4_7 The Backpropagation Algorithm]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In other words, backpropagation aims to minimize the cost function by adjusting the network’s weights and biases.&amp;lt;ref name=&amp;quot;ref_616ab8c4&amp;quot;&amp;gt;[https://towardsdatascience.com/understanding-backpropagation-algorithm-7bb3aa2f95fd Understanding Backpropagation Algorithm]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# One way to train our model is called Backpropagation.&amp;lt;ref name=&amp;quot;ref_ee2d04fd&amp;quot;&amp;gt;[https://www.edureka.co/blog/backpropagation/ Training A Neural Network]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The Backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule or gradient descent.&amp;lt;ref name=&amp;quot;ref_ee2d04fd&amp;quot; /&amp;gt;&lt;br /&gt;
# The structure of a BP network is shown in Figure 12.4.&amp;lt;ref name=&amp;quot;ref_60db9508&amp;quot;&amp;gt;[https://www.sciencedirect.com/topics/engineering/backpropagation-algorithm Backpropagation Algorithm - an overview]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The BP network is known from the abbreviation.&amp;lt;ref name=&amp;quot;ref_60db9508&amp;quot; /&amp;gt;&lt;br /&gt;
# The BP algorithm can be summarized by the steps below: (1) Initialize all weightings and thresholds.&amp;lt;ref name=&amp;quot;ref_60db9508&amp;quot; /&amp;gt;&lt;br /&gt;
# The project describes teaching process of multi-layer neural network employing backpropagation algorithm.&amp;lt;ref name=&amp;quot;ref_263b89d5&amp;quot;&amp;gt;[http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html Backpropagation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Only in the mid-eighties was the backpropagation algorithm worked out.&amp;lt;ref name=&amp;quot;ref_263b89d5&amp;quot; /&amp;gt;&lt;br /&gt;
# It is one kind of backpropagation network which produces a mapping of a static input for static output.&amp;lt;ref name=&amp;quot;ref_78df0a58&amp;quot;&amp;gt;[https://www.guru99.com/backpropogation-neural-network.html Back Propagation Neural Network: Explained With Simple Example]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Recurrent backpropagation is fed forward until a fixed value is achieved.&amp;lt;ref name=&amp;quot;ref_78df0a58&amp;quot; /&amp;gt;&lt;br /&gt;
# Backpropagation is an algorithm commonly used to train neural networks.&amp;lt;ref name=&amp;quot;ref_afb68f78&amp;quot;&amp;gt;[https://missinglink.ai/guides/neural-network-concepts/backpropagation-neural-networks-process-examples-code-minus-math/ Backpropagation in Neural Networks: Process, Example &amp;amp; Code]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation is simply an algorithm which performs a highly efficient search for the optimal weight values, using the gradient descent technique.&amp;lt;ref name=&amp;quot;ref_afb68f78&amp;quot; /&amp;gt;&lt;br /&gt;
# We’ll explain the backpropagation process in the abstract, with very simple math.&amp;lt;ref name=&amp;quot;ref_afb68f78&amp;quot; /&amp;gt;&lt;br /&gt;
# The backpropagation algorithm calculates how much the final output values, o1 and o2, are affected by each of the weights.&amp;lt;ref name=&amp;quot;ref_afb68f78&amp;quot; /&amp;gt;&lt;br /&gt;
# Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally.&amp;lt;ref name=&amp;quot;ref_6b8aa611&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Backpropagation Backpropagation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function.&amp;lt;ref name=&amp;quot;ref_6b8aa611&amp;quot; /&amp;gt;&lt;br /&gt;
# In the derivation of backpropagation, other intermediate quantities are used; they are introduced as needed below.&amp;lt;ref name=&amp;quot;ref_6b8aa611&amp;quot; /&amp;gt;&lt;br /&gt;
# This is the reason why backpropagation requires the activation function to be differentiable.&amp;lt;ref name=&amp;quot;ref_6b8aa611&amp;quot; /&amp;gt;&lt;br /&gt;
# Thus, for the purposes of derivation, the backpropagation algorithm will concern itself with only one input-output pair.&amp;lt;ref name=&amp;quot;ref_8795944e&amp;quot;&amp;gt;[https://brilliant.org/wiki/backpropagation/ Brilliant Math &amp;amp; Science Wiki]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This equation is where backpropagation gets its name.&amp;lt;ref name=&amp;quot;ref_8795944e&amp;quot; /&amp;gt;&lt;br /&gt;
# The principle of the backpropagation approach is to model a given function by modifying internal weightings of input signals to produce an expected output signal.&amp;lt;ref name=&amp;quot;ref_38d3d640&amp;quot;&amp;gt;[https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/ How to Code a Neural Network with Backpropagation In Python (from scratch)]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Technically, the backpropagation algorithm is a method for training the weights in a multilayer feed-forward neural network.&amp;lt;ref name=&amp;quot;ref_38d3d640&amp;quot; /&amp;gt;&lt;br /&gt;
# Running the example prints the network after the backpropagation of error is complete.&amp;lt;ref name=&amp;quot;ref_38d3d640&amp;quot; /&amp;gt;&lt;br /&gt;
# However, the main learning mechanism behind these advances – error backpropagation – appears to be at odds with neurobiology.&amp;lt;ref name=&amp;quot;ref_deb80be4&amp;quot;&amp;gt;[https://papers.nips.cc/paper/8089-dendritic-cortical-microcircuits-approximate-the-backpropagation-algorithm Paper]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We demonstrate the learning capabilities of the model in regression and classification tasks, and show analytically that it approximates the error backpropagation algorithm.&amp;lt;ref name=&amp;quot;ref_deb80be4&amp;quot; /&amp;gt;&lt;br /&gt;
# Examining the algorithm you can see why it&amp;#039;s called backpropagation.&amp;lt;ref name=&amp;quot;ref_284021ee&amp;quot;&amp;gt;[https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Neural_Networks_and_Deep_Learning_(Nielsen)/02%3A_How_the_backpropagation_algorithm_works/2.03%3A_The_backpropagation_algorithm 2.3: The backpropagation algorithm]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation with linear neurons Suppose we replace the usual non-linear \(σ\) function with \(σ(z)=z\) throughout the network.&amp;lt;ref name=&amp;quot;ref_284021ee&amp;quot; /&amp;gt;&lt;br /&gt;
# Rewrite the backpropagation algorithm for this case.&amp;lt;ref name=&amp;quot;ref_284021ee&amp;quot; /&amp;gt;&lt;br /&gt;
# As I&amp;#039;ve described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, \(C=C_x\).&amp;lt;ref name=&amp;quot;ref_284021ee&amp;quot; /&amp;gt;&lt;br /&gt;
# What if I told you those people don’t even know what machine learning and things like backpropagation really are?&amp;lt;ref name=&amp;quot;ref_7263d7ec&amp;quot;&amp;gt;[https://www.kdnuggets.com/2019/01/backpropagation-algorithm-demystified.html The Backpropagation Algorithm Demystified]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Now that you’ve learnt some of the main principles of Backpropagation in Machine Learning you understand how it isn’t about having technology come to life so they can abolish the human race.&amp;lt;ref name=&amp;quot;ref_7263d7ec&amp;quot; /&amp;gt;&lt;br /&gt;
# Backpropagation allows us to calculate the gradient of the loss function with respect to each of the weights of the network.&amp;lt;ref name=&amp;quot;ref_1cc8ecc5&amp;quot;&amp;gt;[https://deepai.org/machine-learning-glossary-and-terms/backpropagation Backpropagation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation involves the calculation of the gradient proceeding backwards through the feedforward network from the last layer through to the first.&amp;lt;ref name=&amp;quot;ref_1cc8ecc5&amp;quot; /&amp;gt;&lt;br /&gt;
# The backpropagation algorithm involves first calculating the derivates at layer N, that is the last layer.&amp;lt;ref name=&amp;quot;ref_1cc8ecc5&amp;quot; /&amp;gt;&lt;br /&gt;
# Initially, the network was trained using backpropagation through all the 18 layers.&amp;lt;ref name=&amp;quot;ref_1cc8ecc5&amp;quot; /&amp;gt;&lt;br /&gt;
# Backpropagation (backward propagation) is an important mathematical tool for improving the accuracy of predictions in data mining and machine learning.&amp;lt;ref name=&amp;quot;ref_ecd1ed02&amp;quot;&amp;gt;[https://searchenterpriseai.techtarget.com/definition/backpropagation-algorithm What is backpropagation algorithm?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Artificial neural networks use backpropagation as a learning algorithm to compute a gradient descent with respect to weights.&amp;lt;ref name=&amp;quot;ref_ecd1ed02&amp;quot; /&amp;gt;&lt;br /&gt;
# Because backpropagation requires a known, desired output for each input value in order to calculate the loss function gradient, it is usually classified as a type of supervised machine learning.&amp;lt;ref name=&amp;quot;ref_ecd1ed02&amp;quot; /&amp;gt;&lt;br /&gt;
# Professor Geoffrey Hinton explains backpropagation.&amp;lt;ref name=&amp;quot;ref_ecd1ed02&amp;quot; /&amp;gt;&lt;br /&gt;
# We can define the backpropagation algorithm as an algorithm that trains some given feed-forward Neural Network for a given input pattern where the classifications are known to us.&amp;lt;ref name=&amp;quot;ref_96cd4b8d&amp;quot;&amp;gt;[https://www.mygreatlearning.com/blog/backpropagation-algorithm/ An Introduction to Backpropagation Algorithm and How it Works?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Before we deep dive into backpropagation, we should be aware about who introduced this concept and when.&amp;lt;ref name=&amp;quot;ref_96cd4b8d&amp;quot; /&amp;gt;&lt;br /&gt;
# Today, backpropagation is doing good.&amp;lt;ref name=&amp;quot;ref_96cd4b8d&amp;quot; /&amp;gt;&lt;br /&gt;
# Neural network training happens through backpropagation.&amp;lt;ref name=&amp;quot;ref_96cd4b8d&amp;quot; /&amp;gt;&lt;br /&gt;
# In &amp;quot;Computing Gradients (Part 2)&amp;quot; we will go over the actual backpropagation and see step by step how the math works.&amp;lt;ref name=&amp;quot;ref_ddc2607d&amp;quot;&amp;gt;[https://www.linkedin.com/pulse/understanding-backpropagation-algorithm-introducing-math-kostadinov/ Understanding Backpropagation algorithm: Introducing the math behind neural networks (Part 1)]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation is the central mechanism by which neural networks learn.&amp;lt;ref name=&amp;quot;ref_9a0078e6&amp;quot;&amp;gt;[https://wiki.pathmind.com/backpropagation A Beginner&amp;#039;s Guide to Backpropagation in Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Backpropagation takes the error associated with a wrong guess by a neural network, and uses that error to adjust the neural network’s parameters in the direction of less error.&amp;lt;ref name=&amp;quot;ref_9a0078e6&amp;quot; /&amp;gt;&lt;br /&gt;
# Backpropagation works by approximating the non-linear relationship between the input and the output by adjusting the weight values internally.&amp;lt;ref name=&amp;quot;ref_4c6214e7&amp;quot;&amp;gt;[https://www.cse.unsw.edu.au/~cs9417ml/MLP2/BackPropagation.html Mutli-Layer Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The following figure shows the topology of the Backpropagation neural network that includes an input layer, one hidden layer and an output layer.&amp;lt;ref name=&amp;quot;ref_4c6214e7&amp;quot; /&amp;gt;&lt;br /&gt;
# The operations of the Backpropagation neural networks can be divided into two steps: feedforward and Backpropagation.&amp;lt;ref name=&amp;quot;ref_4c6214e7&amp;quot; /&amp;gt;&lt;br /&gt;
# Some modifications to the Backpropagation algorithm allows the learning rate to decrease from a large value during the learning process.&amp;lt;ref name=&amp;quot;ref_4c6214e7&amp;quot; /&amp;gt;&lt;br /&gt;
# The backpropagation algorithm — the process of training a neural network — was a glaring one for both of us in particular.&amp;lt;ref name=&amp;quot;ref_b97f023b&amp;quot;&amp;gt;[https://ayearofai.com/rohan-lenny-1-neural-networks-the-backpropagation-algorithm-explained-abf4609d4f9d Rohan &amp;amp; Lenny #1: Neural Networks &amp;amp; The Backpropagation Algorithm, Explained]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Together, we embarked on mastering backprop through some great online lectures from professors at MIT &amp;amp; Stanford.&amp;lt;ref name=&amp;quot;ref_b97f023b&amp;quot; /&amp;gt;&lt;br /&gt;
# Today, we’ll do our best to explain backpropagation and neural networks from the beginning.&amp;lt;ref name=&amp;quot;ref_b97f023b&amp;quot; /&amp;gt;&lt;br /&gt;
# The backpropagation algorithm was a major milestone in machine learning because, before it was discovered, optimization methods were extremely unsatisfactory.&amp;lt;ref name=&amp;quot;ref_b97f023b&amp;quot; /&amp;gt;&lt;br /&gt;
# In the next section, I&amp;#039;ll introduce a way to visualize the process we&amp;#039;ve just developed in addition to presenting an end-to-end method for implementing backpropagation.&amp;lt;ref name=&amp;quot;ref_2474e33d&amp;quot;&amp;gt;[https://www.jeremyjordan.me/neural-networks-training/ Neural networks: training with backpropagation.]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Note: Backpropagation is simply a method for calculating the partial derivative of the cost function with respect to all of the parameters.&amp;lt;ref name=&amp;quot;ref_2474e33d&amp;quot; /&amp;gt;&lt;br /&gt;
# The first way to do backpropagation is to backpropagate through a non-linear function.&amp;lt;ref name=&amp;quot;ref_b71a2972&amp;quot;&amp;gt;[https://atcold.github.io/pytorch-Deep-Learning/en/week02/02-1/ Introduction to Gradient Descent and Backpropagation Algorithm · Deep Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# For the backprop algorithm, we need two sets of gradients - one with respect to the states (each module of the network) and one with respect to the weights (all the parameters in a particular module).&amp;lt;ref name=&amp;quot;ref_b71a2972&amp;quot; /&amp;gt;&lt;br /&gt;
# We can again use chain rule for backprop.&amp;lt;ref name=&amp;quot;ref_b71a2972&amp;quot; /&amp;gt;&lt;br /&gt;
# The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations.&amp;lt;ref name=&amp;quot;ref_c183aa6d&amp;quot;&amp;gt;[https://www.mitpressjournals.org/doi/10.1162/089976699300016223 Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In the algorithmic modification, the standard BP algorithm (SBP) was modified by introducing a momentum term, the Quasi-Newton method as a second-order method, and the resilient backpropagation algorithm.&amp;lt;ref name=&amp;quot;ref_8f512bcc&amp;quot;&amp;gt;[https://content.iospress.com/articles/journal-of-intelligent-and-fuzzy-systems/ifs190063 An improved third term backpropagation algorithm – inertia expanded chebyshev orthogonal polynomial]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The proposed method shows better performance when compared to standard backpropagation algorithm (SBP) and backpropagation algorithm with momentum (SBPM).&amp;lt;ref name=&amp;quot;ref_8f512bcc&amp;quot; /&amp;gt;&lt;br /&gt;
# Specifically, explanation of the backpropagation algorithm was skipped.&amp;lt;ref name=&amp;quot;ref_e5fc12a3&amp;quot;&amp;gt;[https://rubikscode.net/2018/01/22/backpropagation-algorithm-in-artificial-neural-networks/ Backpropagation Algorithm in Artificial Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Like the majority of important aspects of Neural Networks, we can find roots of backpropagation in the 70s of the last century.&amp;lt;ref name=&amp;quot;ref_e5fc12a3&amp;quot; /&amp;gt;&lt;br /&gt;
# One of the main tasks of backpropagation is to give us information on how quickly the error changes when weights are changed.&amp;lt;ref name=&amp;quot;ref_e5fc12a3&amp;quot; /&amp;gt;&lt;br /&gt;
# As mentioned, there are some assumptions that we need to make regarding this function in order for backpropagation to be applicable.&amp;lt;ref name=&amp;quot;ref_e5fc12a3&amp;quot; /&amp;gt;&lt;br /&gt;
# Refer to the figure 2.12 that illustrates the backpropagation multilayer network with layers.&amp;lt;ref name=&amp;quot;ref_57e74518&amp;quot;&amp;gt;[http://wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/kk-thesis-html/node22.html 2.4.4 Backpropagation Learning Algorithm]&amp;lt;/ref&amp;gt;&lt;br /&gt;
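The corpus items above describe backpropagation as a feedforward pass followed by a backward pass that applies the chain rule layer by layer and then updates the weights by gradient descent. The following is a minimal sketch of that two-step process for a hypothetical 2-2-1 sigmoid network and a single input-output pair (NumPy; the numbers are illustrative and not taken from the cited sources):&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 &lt;br /&gt;
 # One training pair for a tiny 2-2-1 network.&lt;br /&gt;
 x = np.array([0.05, 0.10])            # input&lt;br /&gt;
 t = np.array([0.01])                  # target output&lt;br /&gt;
 rng = np.random.default_rng(0)&lt;br /&gt;
 W1 = rng.normal(size=(2, 2))          # input-to-hidden weights&lt;br /&gt;
 W2 = rng.normal(size=(1, 2))          # hidden-to-output weights&lt;br /&gt;
 &lt;br /&gt;
 def sigmoid(z):&lt;br /&gt;
     return 1.0 / (1.0 + np.exp(-z))&lt;br /&gt;
 &lt;br /&gt;
 for step in range(1000):&lt;br /&gt;
     # Feedforward: compute the activations layer by layer.&lt;br /&gt;
     h = sigmoid(W1 @ x)&lt;br /&gt;
     y = sigmoid(W2 @ h)&lt;br /&gt;
     # Backward pass: chain rule from the output layer backwards.&lt;br /&gt;
     delta2 = (y - t) * y * (1.0 - y)          # output-layer error signal&lt;br /&gt;
     delta1 = (W2.T @ delta2) * h * (1.0 - h)  # error at the hidden layer&lt;br /&gt;
     # Gradient-descent update of the weights.&lt;br /&gt;
     W2 -= 0.5 * np.outer(delta2, h)&lt;br /&gt;
     W1 -= 0.5 * np.outer(delta1, x)&lt;br /&gt;
Repeating these two steps drives the output y towards the target t; this is the search in weight space by gradient descent that the cited sources describe.&lt;br /&gt;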
===Sources===&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
</feed>