<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0</id>
	<title>Perceptron - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0"/>
	<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;action=history"/>
	<updated>2026-04-05T09:51:19Z</updated>
	<subtitle>Revision history for this page</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=51144&amp;oldid=prev</id>
		<title>Edit by Pythagoras0 at 07:53, 17 February 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=51144&amp;oldid=prev"/>
		<updated>2021-02-17T07:53:47Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2021년 2월 17일 (수) 07:53 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l112&quot; &gt;112번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;112번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 메타데이터 ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==메타데이터==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q690207 Q690207]&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q690207 Q690207]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Spacy 패턴 목록===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LEMMA&amp;#039;: &amp;#039;perceptron&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
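	<!-- The Spacy pattern added in this revision, [{'LEMMA': 'perceptron'}], is a spaCy
	     token-matcher rule. A minimal sketch of how such a pattern might be registered and
	     applied, assuming spaCy v3 and the en_core_web_sm model (neither ships with this
	     wiki; both are external assumptions):

	     import spacy
	     from spacy.matcher import Matcher

	     nlp = spacy.load("en_core_web_sm")
	     matcher = Matcher(nlp.vocab)
	     # one pattern: match any single token whose lemma is "perceptron"
	     matcher.add("PERCEPTRON", [[{"LEMMA": "perceptron"}]])

	     doc = nlp("Multilayer perceptrons generalize the original perceptron.")
	     for match_id, start, end in matcher(doc):
	         print(doc[start:end].text)  # matches "perceptrons" and "perceptron"
	     -->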
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46940&amp;oldid=prev</id>
		<title>Pythagoras0: /* Metadata */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46940&amp;oldid=prev"/>
		<updated>2020-12-26T12:10:16Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;메타데이터: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2020년 12월 26일 (토) 12:10 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l111&quot; &gt;111번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;111번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== 메타데이터 ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===위키데이터===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* ID :  [https://www.wikidata.org/wiki/Q690207 Q690207]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
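	<!-- The metadata added in this revision records the Wikidata ID Q690207. A minimal
	     sketch of how the item behind such an ID might be fetched, using Wikidata's
	     public Special:EntityData endpoint and the requests library (the endpoint is
	     standard, but treat the exact response layout as an assumption):

	     import requests

	     qid = "Q690207"  # the ID recorded in this revision
	     url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
	     entity = requests.get(url, timeout=10).json()["entities"][qid]
	     print(entity["labels"]["en"]["value"])        # English label of the item
	     print(entity["descriptions"]["en"]["value"])  # short English description
	     -->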
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46407&amp;oldid=prev</id>
		<title>Pythagoras0: /* Corpus */</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46407&amp;oldid=prev"/>
		<updated>2020-12-21T16:34:36Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;말뭉치&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← 이전 판&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;2020년 12월 21일 (월) 16:34 판&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l9&quot; &gt;9번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;9번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# In this post you will get a crash course in the terminology and processes used in the field of multi-layer perceptron artificial neural networks.&amp;lt;ref name=&amp;quot;ref_e539cb9b&amp;quot;&amp;gt;[https://machinelearningmastery.com/neural-networks-crash-course/ Crash Course On Multi-Layer Perceptron Neural Networks]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# In this post you will get a crash course in the terminology and processes used in the field of multi-layer perceptron artificial neural networks.&amp;lt;ref name=&amp;quot;ref_e539cb9b&amp;quot;&amp;gt;[https://machinelearningmastery.com/neural-networks-crash-course/ Crash Course On Multi-Layer Perceptron Neural Networks]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The Neural Networks work the same way as the perceptron.&amp;lt;ref name=&amp;quot;ref_3761582f&amp;quot;&amp;gt;[https://towardsdatascience.com/what-the-hell-is-perceptron-626217814f53 What the Hell is Perceptron?]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The Neural Networks work the same way as the perceptron.&amp;lt;ref name=&amp;quot;ref_3761582f&amp;quot;&amp;gt;[https://towardsdatascience.com/what-the-hell-is-perceptron-626217814f53 What the Hell is Perceptron?]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;# Perceptron is a leading global provider of 3D automated measurement solutions and coordinate measuring machines with 38 years of experience.&amp;lt;ref name=&amp;quot;ref_c1b644d2&amp;quot;&amp;gt;[https://perceptron.com/ Perceptron]&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# This network was based on a unit called the perceptron.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot;&amp;gt;[https://www.sciencedirect.com/topics/mathematics/perceptron Perceptron - an overview]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# This network was based on a unit called the perceptron.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot;&amp;gt;[https://www.sciencedirect.com/topics/mathematics/perceptron Perceptron - an overview]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# A single-layer perceptron was found to be useful in classifying a continuous-valued set of inputs into one of two classes.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# A single-layer perceptron was found to be useful in classifying a continuous-valued set of inputs into one of two classes.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l109&quot; &gt;109번째 줄:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;108번째 줄:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The version of Perceptron we use nowadays was introduced by Minsky and Papert in 1969.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The version of Perceptron we use nowadays was introduced by Minsky and Papert in 1969.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The perceptron “learns” how to adapt the weights using backpropagation.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# The perceptron “learns” how to adapt the weights using backpropagation.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46404&amp;oldid=prev</id>
		<title>Pythagoras0: /* Notes */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=%ED%8D%BC%EC%85%89%ED%8A%B8%EB%A1%A0&amp;diff=46404&amp;oldid=prev"/>
		<updated>2020-12-21T16:32:23Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;노트: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;새 문서&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== 노트 ==&lt;br /&gt;
&lt;br /&gt;
===위키데이터===&lt;br /&gt;
* ID :  [https://www.wikidata.org/wiki/Q690207 Q690207]&lt;br /&gt;
===말뭉치===&lt;br /&gt;
# A Perceptron is an algorithm used for supervised learning of binary classifiers.&amp;lt;ref name=&amp;quot;ref_9288270f&amp;quot;&amp;gt;[https://deepai.org/machine-learning-glossary-and-terms/perceptron Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The weighted sum is then applied to the activation function, producing the perceptron&amp;#039;s output.&amp;lt;ref name=&amp;quot;ref_9288270f&amp;quot; /&amp;gt;&lt;br /&gt;
# This means the perceptron is used to classify data into two parts, hence binary.&amp;lt;ref name=&amp;quot;ref_9288270f&amp;quot; /&amp;gt;&lt;br /&gt;
# In this post you will get a crash course in the terminology and processes used in the field of multi-layer perceptron artificial neural networks.&amp;lt;ref name=&amp;quot;ref_e539cb9b&amp;quot;&amp;gt;[https://machinelearningmastery.com/neural-networks-crash-course/ Crash Course On Multi-Layer Perceptron Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The Neural Networks work the same way as the perceptron.&amp;lt;ref name=&amp;quot;ref_3761582f&amp;quot;&amp;gt;[https://towardsdatascience.com/what-the-hell-is-perceptron-626217814f53 What the Hell is Perceptron?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Perceptron is a leading global provider of 3D automated measurement solutions and coordinate measuring machines with 38 years of experience.&amp;lt;ref name=&amp;quot;ref_c1b644d2&amp;quot;&amp;gt;[https://perceptron.com/ Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This network was based on a unit called the perceptron.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot;&amp;gt;[https://www.sciencedirect.com/topics/mathematics/perceptron Perceptron - an overview]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# A single-layer perceptron was found to be useful in classifying a continuous-valued set of inputs into one of two classes.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron computes a weighted sum of the inputs, subtracts a threshold, and passes one of two possible values out as the result.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot; /&amp;gt;&lt;br /&gt;
# Variations on the perceptron-based ANN were further explored during the 1960s by Rosenblatt himself and by Bernard Widrow and Marcian Hoff, among others.&amp;lt;ref name=&amp;quot;ref_4c1006d1&amp;quot; /&amp;gt;&lt;br /&gt;
# Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns.&amp;lt;ref name=&amp;quot;ref_a87dac7a&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Perceptron Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# It is often believed (incorrectly) that they also conjectured that a similar result would hold for a multi-layer perceptron network.&amp;lt;ref name=&amp;quot;ref_a87dac7a&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron learning algorithm does not terminate if the learning set is not linearly separable.&amp;lt;ref name=&amp;quot;ref_a87dac7a&amp;quot; /&amp;gt;&lt;br /&gt;
# The most famous example of the perceptron&amp;#039;s inability to solve problems with linearly nonseparable vectors is the Boolean exclusive-or problem.&amp;lt;ref name=&amp;quot;ref_a87dac7a&amp;quot; /&amp;gt;&lt;br /&gt;
# Perceptron was introduced by Frank Rosenblatt in 1957.&amp;lt;ref name=&amp;quot;ref_e659eeb5&amp;quot;&amp;gt;[https://www.simplilearn.com/what-is-perceptron-tutorial What is Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The Perceptron receives multiple input signals, and if the sum of the input signals exceeds a certain threshold, it either outputs a signal or does not return an output.&amp;lt;ref name=&amp;quot;ref_e659eeb5&amp;quot; /&amp;gt;&lt;br /&gt;
# A Perceptron accepts inputs, moderates them with certain weight values, then applies the transformation function to output the final result.&amp;lt;ref name=&amp;quot;ref_e659eeb5&amp;quot; /&amp;gt;&lt;br /&gt;
# In the Perceptron Learning Rule, the predicted output is compared with the known output.&amp;lt;ref name=&amp;quot;ref_e659eeb5&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron is a type of artificial neural network invented in 1957 by Frank Rosenblatt.&amp;lt;ref name=&amp;quot;ref_9517c55c&amp;quot;&amp;gt;[https://docs.rapidminer.com/latest/studio/operators/modeling/predictive/neural_nets/perceptron.html RapidMiner Documentation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As you add points, the perceptron will attempt to classify them based on their color.&amp;lt;ref name=&amp;quot;ref_a36aba4c&amp;quot;&amp;gt;[https://www.cs.utexas.edu/~teammco/misc/perceptron/ Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The line will be drawn where the perceptron believes the two classes are divided.&amp;lt;ref name=&amp;quot;ref_a36aba4c&amp;quot; /&amp;gt;&lt;br /&gt;
# Each time you add a point, the perceptron&amp;#039;s raw output value will be displayed.&amp;lt;ref name=&amp;quot;ref_a36aba4c&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron is trained in real time with each point that is added.&amp;lt;ref name=&amp;quot;ref_a36aba4c&amp;quot; /&amp;gt;&lt;br /&gt;
# Understanding the perceptron neuron model, by Roberto Lopez, Artelnics.&amp;lt;ref name=&amp;quot;ref_f0f69e3c&amp;quot;&amp;gt;[https://www.neuraldesigner.com/blog/perceptron-the-main-component-of-neural-networks Understanding the perceptron neuron model]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The most widely used neuron model is the perceptron.&amp;lt;ref name=&amp;quot;ref_f0f69e3c&amp;quot; /&amp;gt;&lt;br /&gt;
# This is the neuron model behind perceptron layers (also called dense layers), which are present in the majority of neural networks.&amp;lt;ref name=&amp;quot;ref_f0f69e3c&amp;quot; /&amp;gt;&lt;br /&gt;
# In this post, we explain the mathematics of the perceptron neuron model: Perceptron elements.&amp;lt;ref name=&amp;quot;ref_f0f69e3c&amp;quot; /&amp;gt;&lt;br /&gt;
# But skeptics insisted the perceptron was incapable of reshaping the relationship between human and machine.&amp;lt;ref name=&amp;quot;ref_bd49f10d&amp;quot;&amp;gt;[https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon Professor’s perceptron paved the way for AI – 60 years too soon]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The perceptron’s rise and fall helped usher in an era known as the “AI winter” – decades in which federal funding for artificial intelligence research dried up.&amp;lt;ref name=&amp;quot;ref_bd49f10d&amp;quot; /&amp;gt;&lt;br /&gt;
# The principles underlying the perceptron helped spark the modern artificial intelligence revolution.&amp;lt;ref name=&amp;quot;ref_bd49f10d&amp;quot; /&amp;gt;&lt;br /&gt;
# “The perceptron was the first neural network,” said Thorsten Joachims, professor in CIS, who teaches about Rosenblatt and the perceptron in his Introduction to Machine Learning course.&amp;lt;ref name=&amp;quot;ref_bd49f10d&amp;quot; /&amp;gt;&lt;br /&gt;
# The weights allow the perceptron to evaluate the relative importance of each of the inputs.&amp;lt;ref name=&amp;quot;ref_bfcf6479&amp;quot;&amp;gt;[https://missinglink.ai/guides/neural-network-concepts/perceptrons-and-multi-layer-perceptrons-the-artificial-neuron-at-the-core-of-deep-learning/ Perceptrons &amp;amp; Multi-Layer Perceptrons: the Artificial Neuron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# It makes it possible to fine-tune the numeric output of the perceptron.&amp;lt;ref name=&amp;quot;ref_bfcf6479&amp;quot; /&amp;gt;&lt;br /&gt;
# The activation function also helps the perceptron to learn, when it is part of a multilayer perceptron (MLP).&amp;lt;ref name=&amp;quot;ref_bfcf6479&amp;quot; /&amp;gt;&lt;br /&gt;
# There are numerous kinds of machine learning models (random forest, SVM, LDA, etc.), among which single and multilayer perceptron learning algorithms have their place.&amp;lt;ref name=&amp;quot;ref_8cfa4066&amp;quot;&amp;gt;[https://medium.com/analytics-steps/understanding-the-perceptron-model-in-a-neural-network-2b3737ed70a2 Understanding the Perceptron Model in a Neural Network]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# As for the history of the perceptron model, it was first developed at Cornell Aeronautical Laboratory, United States, in 1957 for machine-implemented image recognition.&amp;lt;ref name=&amp;quot;ref_8cfa4066&amp;quot; /&amp;gt;&lt;br /&gt;
# A single-layer perceptron model is a feed-forward network that depends on a threshold transfer function.&amp;lt;ref name=&amp;quot;ref_8cfa4066&amp;quot; /&amp;gt;&lt;br /&gt;
# A multi-layered perceptron model has a structure similar to a single-layered perceptron model, but with one or more hidden layers.&amp;lt;ref name=&amp;quot;ref_8cfa4066&amp;quot; /&amp;gt;&lt;br /&gt;
# While in actual neurons the dendrite receives electrical signals from the axons of other neurons, in the perceptron these electrical signals are represented as numerical values.&amp;lt;ref name=&amp;quot;ref_b7061cb4&amp;quot;&amp;gt;[https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Neuron/index.html Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This is also modeled in the perceptron by multiplying each input value by a value called the weight.&amp;lt;ref name=&amp;quot;ref_b7061cb4&amp;quot; /&amp;gt;&lt;br /&gt;
# The input vector: all the input values of each perceptron are collectively called the input vector of that perceptron.&amp;lt;ref name=&amp;quot;ref_b7061cb4&amp;quot; /&amp;gt;&lt;br /&gt;
# The weight vector: similarly, all the weight values of each perceptron are collectively called the weight vector of that perceptron.&amp;lt;ref name=&amp;quot;ref_b7061cb4&amp;quot; /&amp;gt;&lt;br /&gt;
# Welcome to AAC&amp;#039;s series on Perceptron neural networks.&amp;lt;ref name=&amp;quot;ref_87df2059&amp;quot;&amp;gt;[https://www.allaboutcircuits.com/technical-articles/how-to-train-a-basic-perceptron-neural-network/ How to Train a Basic Perceptron Neural Network]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# A single layer perceptron (SLP) is a feed-forward network based on a threshold transfer function.&amp;lt;ref name=&amp;quot;ref_bbde5c88&amp;quot;&amp;gt;[https://saedsayad.com/artificial_neural_network_bkp.htm Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The single layer perceptron does not have a priori knowledge, so the initial weights are assigned randomly.&amp;lt;ref name=&amp;quot;ref_bbde5c88&amp;quot; /&amp;gt;&lt;br /&gt;
# The input values are presented to the perceptron, and if the predicted output is the same as the desired output, then the performance is considered satisfactory and no changes to the weights are made.&amp;lt;ref name=&amp;quot;ref_bbde5c88&amp;quot; /&amp;gt;&lt;br /&gt;
# A multi-layer perceptron (MLP) has the same structure of a single layer perceptron with one or more hidden layers.&amp;lt;ref name=&amp;quot;ref_bbde5c88&amp;quot; /&amp;gt;&lt;br /&gt;
# Perceptron is the first neural network to be created.&amp;lt;ref name=&amp;quot;ref_76758d3f&amp;quot;&amp;gt;[https://analyticsindiamag.com/perceptron-is-the-only-neural-network-without-any-hidden-layer/ Hands-On Implementation Of Perceptron Algorithm in Python]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# I’ve shown a basic implementation of the perceptron algorithm in Python to classify the flowers in the iris dataset.&amp;lt;ref name=&amp;quot;ref_76758d3f&amp;quot; /&amp;gt;&lt;br /&gt;
# Frank Rosenblatt, godfather of the perceptron, popularized it as a device rather than an algorithm.&amp;lt;ref name=&amp;quot;ref_d8ffb75a&amp;quot;&amp;gt;[https://wiki.pathmind.com/multilayer-perceptron A Beginner&amp;#039;s Guide to Multilayer Perceptrons (MLP)]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# A perceptron is a linear classifier; that is, it is an algorithm that classifies input by separating two categories with a straight line.&amp;lt;ref name=&amp;quot;ref_d8ffb75a&amp;quot; /&amp;gt;&lt;br /&gt;
# Rosenblatt built a single-layer perceptron.&amp;lt;ref name=&amp;quot;ref_d8ffb75a&amp;quot; /&amp;gt;&lt;br /&gt;
# It is composed of more than one perceptron.&amp;lt;ref name=&amp;quot;ref_d8ffb75a&amp;quot; /&amp;gt;&lt;br /&gt;
# For understanding single layer perceptron, it is important to understand Artificial Neural Networks (ANN).&amp;lt;ref name=&amp;quot;ref_c93266b2&amp;quot;&amp;gt;[https://www.tutorialspoint.com/tensorflow/tensorflow_single_layer_perceptron.htm Single Layer Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The single layer perceptron is the first proposed neural model.&amp;lt;ref name=&amp;quot;ref_c93266b2&amp;quot; /&amp;gt;&lt;br /&gt;
# The computation of a single layer perceptron is performed as a sum over the input vector, each value multiplied by the corresponding element of the weight vector.&amp;lt;ref name=&amp;quot;ref_c93266b2&amp;quot; /&amp;gt;&lt;br /&gt;
# Let us focus on the implementation of single layer perceptron for an image classification problem using TensorFlow.&amp;lt;ref name=&amp;quot;ref_c93266b2&amp;quot; /&amp;gt;&lt;br /&gt;
# Frank Rosenblatt, using the McCulloch-Pitts neuron and the findings of Hebb, went on to develop the first perceptron.&amp;lt;ref name=&amp;quot;ref_7721c527&amp;quot;&amp;gt;[https://web.csulb.edu/~cwallis/artificialn/History.htm History of the Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This perceptron, which could learn in the Hebbean sense, through the weighting of inputs, was instrumental in the later formation of neural networks.&amp;lt;ref name=&amp;quot;ref_7721c527&amp;quot; /&amp;gt;&lt;br /&gt;
# He discussed the perceptron in his 1962 book, Principles of Neurodynamics.&amp;lt;ref name=&amp;quot;ref_7721c527&amp;quot; /&amp;gt;&lt;br /&gt;
# This perceptron has a total of five inputs a1 through a5 with each having a weight of w1 through w5.&amp;lt;ref name=&amp;quot;ref_7721c527&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron learning algorithm and its multiple-layer extension, the backpropagation algorithm, are the foundations of the present-day machine learning revolution.&amp;lt;ref name=&amp;quot;ref_cc05b731&amp;quot;&amp;gt;[https://www.frontiersin.org/articles/10.3389/fncom.2020.00033/full Perceptron Learning and Classification in a Modeled Cortical Pyramidal Cell]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Here we implemented the perceptron learning algorithm in a realistic biophysical model of a layer 5 cortical pyramidal cell with a full complement of non-linear dendritic channels.&amp;lt;ref name=&amp;quot;ref_cc05b731&amp;quot; /&amp;gt;&lt;br /&gt;
# We show that the BP performs these tasks with an accuracy comparable to that of the original perceptron, though the classification capacity of the apical tuft is somewhat limited.&amp;lt;ref name=&amp;quot;ref_cc05b731&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron is a learning algorithm that utilizes a mathematical abstraction of a neuron which applies a threshold activation function to the weighted sum of its input (Figure 1A).&amp;lt;ref name=&amp;quot;ref_cc05b731&amp;quot; /&amp;gt;&lt;br /&gt;
# In particular, we’ll see how to combine several of them into a layer and create a neural network called the perceptron.&amp;lt;ref name=&amp;quot;ref_1c3198e6&amp;quot;&amp;gt;[https://pythonmachinelearning.pro/perceptrons-the-first-neural-networks/ Perceptrons: The First Neural Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We’ll write Python code (using numpy) to build a perceptron network from scratch and implement the learning algorithm.&amp;lt;ref name=&amp;quot;ref_1c3198e6&amp;quot; /&amp;gt;&lt;br /&gt;
# Since the output of a perceptron is binary, we can use it for binary classification, i.e., an input belongs to only one of two classes.&amp;lt;ref name=&amp;quot;ref_1c3198e6&amp;quot; /&amp;gt;&lt;br /&gt;
# In order to construct our perceptron, we need to know how many inputs there are to create our weight vector.&amp;lt;ref name=&amp;quot;ref_1c3198e6&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron is a fundamental unit of the neural network which takes weighted inputs, processes them, and is capable of performing binary classification.&amp;lt;ref name=&amp;quot;ref_29046212&amp;quot;&amp;gt;[https://hackernoon.com/perceptron-deep-learning-basics-3a938c5f84b6 Perceptron — Deep Learning Basics]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In this post, we will discuss the working of the Perceptron Model.&amp;lt;ref name=&amp;quot;ref_29046212&amp;quot; /&amp;gt;&lt;br /&gt;
# In 1958 Frank Rosenblatt proposed the perceptron, a more generalized computational model than the McCulloch-Pitts Neuron.&amp;lt;ref name=&amp;quot;ref_29046212&amp;quot; /&amp;gt;&lt;br /&gt;
# The important feature in the Rosenblatt proposed perceptron was the introduction of weights for the inputs.&amp;lt;ref name=&amp;quot;ref_29046212&amp;quot; /&amp;gt;&lt;br /&gt;
# The training of the perceptron consists of feeding it multiple training samples and calculating the output for each of them.&amp;lt;ref name=&amp;quot;ref_4a2cf41e&amp;quot;&amp;gt;[https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks A Deep Learning Tutorial: From Perceptrons to Deep Networks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The single perceptron approach to deep learning has one major drawback: it can only learn linearly separable functions.&amp;lt;ref name=&amp;quot;ref_4a2cf41e&amp;quot; /&amp;gt;&lt;br /&gt;
# For example, to get the results from a multilayer perceptron, the data is “clamped” to the input layer (hence, this is the first layer to be calculated) and propagated all the way to the output layer.&amp;lt;ref name=&amp;quot;ref_4a2cf41e&amp;quot; /&amp;gt;&lt;br /&gt;
# The connection calculators implement a variety of transfer (e.g., weighted sum, convolutional) and activation (e.g., logistic and tanh for multilayer perceptron, binary for RBM) functions.&amp;lt;ref name=&amp;quot;ref_4a2cf41e&amp;quot; /&amp;gt;&lt;br /&gt;
# Understanding the logic behind the classical single layer perceptron will help you to understand the idea behind deep learning as well.&amp;lt;ref name=&amp;quot;ref_498d9ecc&amp;quot;&amp;gt;[https://sefiks.com/2020/01/04/a-step-by-step-perceptron-example/ A Step by Step Perceptron Example]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# They both cover the perceptron from scratch.&amp;lt;ref name=&amp;quot;ref_498d9ecc&amp;quot; /&amp;gt;&lt;br /&gt;
# We will apply 1st instance to the perceptron.&amp;lt;ref name=&amp;quot;ref_498d9ecc&amp;quot; /&amp;gt;&lt;br /&gt;
# Updating weights means learning in the perceptron.&amp;lt;ref name=&amp;quot;ref_498d9ecc&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron algorithm was designed to classify visual inputs, categorizing subjects into one of two types and separating groups with a line.&amp;lt;ref name=&amp;quot;ref_e37062e2&amp;quot;&amp;gt;[https://whatis.techtarget.com/definition/perceptron Definition from WhatIs.com]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The perceptron algorithm was developed at Cornell Aeronautical Laboratory in 1957, funded by the United States Office of Naval Research.&amp;lt;ref name=&amp;quot;ref_e37062e2&amp;quot; /&amp;gt;&lt;br /&gt;
# The machine, called Mark 1 Perceptron, was physically made up of an array of 400 photocells connected to perceptrons whose weights were recorded in potentiometers, as adjusted by electric motors.&amp;lt;ref name=&amp;quot;ref_e37062e2&amp;quot; /&amp;gt;&lt;br /&gt;
# At the time, the perceptron was expected to be very significant for the development of artificial intelligence (AI).&amp;lt;ref name=&amp;quot;ref_e37062e2&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron algorithm is frequently used in supervised learning, which is a machine learning task that has the advantage of being trained on labeled data.&amp;lt;ref name=&amp;quot;ref_56b33c41&amp;quot;&amp;gt;[https://brilliant.org/wiki/perceptron/ Brilliant Math &amp;amp; Science Wiki]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Specifically, the perceptron algorithm focuses on binary classified data, objects that are either members of one class or another.&amp;lt;ref name=&amp;quot;ref_56b33c41&amp;quot; /&amp;gt;&lt;br /&gt;
# Furthermore, the perceptron algorithm is a type of linear classifier, which classifies data points by using a linear combination of the variables used.&amp;lt;ref name=&amp;quot;ref_56b33c41&amp;quot; /&amp;gt;&lt;br /&gt;
# An interesting consequence of the perceptron&amp;#039;s properties is that it is unable to learn an XOR function!&amp;lt;ref name=&amp;quot;ref_56b33c41&amp;quot; /&amp;gt;&lt;br /&gt;
# Think of a perceptron as a node of a vast, interconnected network, sort of like a binary tree, although the network does not necessarily have to have a top and bottom.&amp;lt;ref name=&amp;quot;ref_9c45321c&amp;quot;&amp;gt;[https://www.cprogramming.com/tutorial/AI/perceptron.html Understanding and Using Perceptrons]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Since linking perceptrons into a network is a bit complicated, let&amp;#039;s take a perceptron by itself.&amp;lt;ref name=&amp;quot;ref_9c45321c&amp;quot; /&amp;gt;&lt;br /&gt;
# A perceptron has a number of external input links, one internal input (called a bias), a threshold, and one output link.&amp;lt;ref name=&amp;quot;ref_9c45321c&amp;quot; /&amp;gt;&lt;br /&gt;
# To the right, you can see a picture of a simple perceptron.&amp;lt;ref name=&amp;quot;ref_9c45321c&amp;quot; /&amp;gt;&lt;br /&gt;
# In machine learning, the Perceptron Learning Algorithm is a supervised learning algorithm for binary classification.&amp;lt;ref name=&amp;quot;ref_b603a881&amp;quot;&amp;gt;[https://www.mygreatlearning.com/blog/perceptron-learning-algorithm/ Perceptron Learning Algorithm: How to Implement Linearly Separable Functions]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The perceptron learning algorithm is the simplest model of a neuron that illustrates how a neural network works.&amp;lt;ref name=&amp;quot;ref_b603a881&amp;quot; /&amp;gt;&lt;br /&gt;
# How the perceptron learning algorithm functions is represented in the figure above.&amp;lt;ref name=&amp;quot;ref_b603a881&amp;quot; /&amp;gt;&lt;br /&gt;
# Moreover, theoretical analysis of the expected error of the perceptron algorithm yields bounds very similar to those of support vector machines.&amp;lt;ref name=&amp;quot;ref_b603a881&amp;quot; /&amp;gt;&lt;br /&gt;
# Two 20 × 20 crossbar circuits were packaged and integrated with discrete CMOS components on two printed circuit boards (Supplementary Fig. 2b) to implement the multilayer perceptron (MLP) (Fig. 4).&amp;lt;ref name=&amp;quot;ref_aa327118&amp;quot;&amp;gt;[https://www.nature.com/articles/s41467-018-04482-4 Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# a A perceptron diagram showing portions of the crossbar circuits involved in the experiment.&amp;lt;ref name=&amp;quot;ref_aa327118&amp;quot; /&amp;gt;&lt;br /&gt;
# b Graph representation of the implemented network; c Equivalent circuit for the first layer of the perceptron.&amp;lt;ref name=&amp;quot;ref_aa327118&amp;quot; /&amp;gt;&lt;br /&gt;
# Concurrently, the measurement of output voltages of the perceptron network is carried out.&amp;lt;ref name=&amp;quot;ref_aa327118&amp;quot; /&amp;gt;&lt;br /&gt;
# A perceptron can simply be seen as a set of inputs, that are weighted and to which we apply an activation function.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot;&amp;gt;[https://maelfabien.github.io/deeplearning/Perceptron/ The Rosenblatt’s Perceptron]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The perceptron was first introduced in 1957 by Frank Rosenblatt.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;br /&gt;
# The version of Perceptron we use nowadays was introduced by Minsky and Papert in 1969.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;br /&gt;
# The perceptron “learns” how to adapt the weights using backpropagation.&amp;lt;ref name=&amp;quot;ref_cf0bb8b3&amp;quot; /&amp;gt;&lt;br /&gt;
===Sources===&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
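	<!-- The corpus in this first revision repeatedly describes one mechanism: the
	     perceptron computes a weighted sum of its inputs, subtracts a threshold (bias),
	     outputs one of two values, and learns by comparing the predicted output with the
	     known output. A minimal numpy sketch of that decision rule and of the Perceptron
	     Learning Rule (the AND data and the hyperparameters are illustrative, not taken
	     from the corpus):

	     import numpy as np

	     def predict(w, b, x):
	         # weighted sum of the inputs plus bias, thresholded to one of two values
	         return 1 if np.dot(w, x) + b > 0 else 0

	     def train(X, y, epochs=20, lr=0.1):
	         # compare the predicted output with the known output and nudge the
	         # weights by the error; this converges only if the data are linearly
	         # separable, which is why a single perceptron cannot learn XOR.
	         w, b = np.zeros(X.shape[1]), 0.0
	         for _ in range(epochs):
	             for x, target in zip(X, y):
	                 error = target - predict(w, b, x)
	                 w += lr * error * x
	                 b += lr * error
	         return w, b

	     # logical AND is linearly separable, so the perceptron can learn it
	     X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
	     y = np.array([0, 0, 0, 1])
	     w, b = train(X, y)
	     print([predict(w, b, x) for x in X])  # expected: [0, 0, 0, 1]
	     -->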
</feed>
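<!-- One corpus line above mentions "a basic implementation of the perceptron algorithm
     in Python to classify the flowers in the iris dataset". That code is not reproduced
     in this feed; a minimal scikit-learn equivalent, under the assumption that
     scikit-learn and its bundled iris dataset are available:

     from sklearn.datasets import load_iris
     from sklearn.linear_model import Perceptron
     from sklearn.model_selection import train_test_split

     X, y = load_iris(return_X_y=True)
     X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
     clf = Perceptron(max_iter=1000, random_state=0).fit(X_train, y_train)
     print(clf.score(X_test, y_test))  # held-out accuracy of the linear classifier
     -->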