<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="ko">
	<id>https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=Inductive_transfer</id>
	<title>Inductive transfer - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.mathnt.net/index.php?action=history&amp;feed=atom&amp;title=Inductive_transfer"/>
	<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;action=history"/>
	<updated>2026-04-10T11:06:38Z</updated>
	<subtitle>Revision history for this page</subtitle>
	<generator>MediaWiki 1.35.0</generator>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=51279&amp;oldid=prev</id>
		<title>Edit by Pythagoras0 at 08:11, 17 February 2021</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=51279&amp;oldid=prev"/>
		<updated>2021-02-17T08:11:18Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 08:11, 17 February 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l113&quot; &gt;Line 113:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 113:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== 메타데이터 ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==메타데이터==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===위키데이터===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q6027324 Q6027324]&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* ID :  [https://www.wikidata.org/wiki/Q6027324 Q6027324]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Spacy 패턴 목록===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LOWER&amp;#039;: &amp;#039;transfer&amp;#039;}, {&amp;#039;LEMMA&amp;#039;: &amp;#039;learning&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [{&amp;#039;LOWER&amp;#039;: &amp;#039;inductive&amp;#039;}, {&amp;#039;LEMMA&amp;#039;: &amp;#039;transfer&amp;#039;}]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
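The two Spacy patterns added in the revision above are token-level rules for spaCy's rule-based Matcher: each dict constrains one token, with LOWER matching the lowercased token text and LEMMA matching its lemma. As a rough sketch of the matching semantics only — a simplified pure-Python stand-in, not spaCy's actual Matcher, and the (text, lemma) pairs below are invented for illustration:

```python
# Simplified illustration of what the two token patterns match.
# spaCy's Matcher applies the same per-token checks over a Doc.

PATTERNS = [
    [{"LOWER": "transfer"}, {"LEMMA": "learning"}],
    [{"LOWER": "inductive"}, {"LEMMA": "transfer"}],
]

def token_matches(token, spec):
    # token is a (text, lemma) pair; spec is one pattern dict
    text, lemma = token
    if "LOWER" in spec and text.lower() != spec["LOWER"]:
        return False
    if "LEMMA" in spec and lemma != spec["LEMMA"]:
        return False
    return True

def find_matches(tokens, patterns):
    # Slide each pattern over the token sequence, collecting matched spans
    hits = []
    for start in range(len(tokens)):
        for pat in patterns:
            end = start + len(pat)
            if end <= len(tokens) and all(
                token_matches(t, s) for t, s in zip(tokens[start:end], pat)
            ):
                hits.append(" ".join(t[0] for t in tokens[start:end]))
    return hits

# (text, lemma) pairs, as an English lemmatizer might produce them
tokens = [("Inductive", "inductive"), ("transfer", "transfer"),
          ("uses", "use"), ("transfer", "transfer"), ("learning", "learning")]
print(find_matches(tokens, PATTERNS))  # → ['Inductive transfer', 'transfer learning']
```

With spaCy installed, the same pattern lists would instead be registered with `Matcher(nlp.vocab)` via `matcher.add(...)` and applied to a processed `Doc`.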
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=47076&amp;oldid=prev</id>
		<title>Pythagoras0: /* 메타데이터 */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=47076&amp;oldid=prev"/>
		<updated>2020-12-26T12:19:14Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;메타데이터: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;ko&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 12:19, 26 December 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l112&quot; &gt;Line 112:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 112:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===소스===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  &amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== 메타데이터 ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===위키데이터===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* ID :  [https://www.wikidata.org/wiki/Q6027324 Q6027324]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
	<entry>
		<id>https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=46246&amp;oldid=prev</id>
		<title>Pythagoras0: /* 노트 */ new section</title>
		<link rel="alternate" type="text/html" href="https://wiki.mathnt.net/index.php?title=Inductive_transfer&amp;diff=46246&amp;oldid=prev"/>
		<updated>2020-12-21T10:12:35Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;노트: &lt;/span&gt; 새 문단&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;새 문서&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== 노트 ==&lt;br /&gt;
&lt;br /&gt;
===위키데이터===&lt;br /&gt;
* ID :  [https://www.wikidata.org/wiki/Q6027324 Q6027324]&lt;br /&gt;
===말뭉치===&lt;br /&gt;
# Transfer learning is a deep learning approach in which a model that has been trained for one task is used as a starting point to train a model for a similar task.&amp;lt;ref name=&amp;quot;ref_d303c383&amp;quot;&amp;gt;[https://kr.mathworks.com/discovery/transfer-learning.html Transfer Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Fine-tuning a network with transfer learning is usually much faster and easier than training a network from scratch.&amp;lt;ref name=&amp;quot;ref_d303c383&amp;quot; /&amp;gt;&lt;br /&gt;
# The two commonly used approaches for deep learning are training a model from scratch and transfer learning.&amp;lt;ref name=&amp;quot;ref_d303c383&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is useful for tasks such as object recognition, for which a variety of popular pretrained models, such as AlexNet and GoogLeNet, can be used as a starting point.&amp;lt;ref name=&amp;quot;ref_d303c383&amp;quot; /&amp;gt;&lt;br /&gt;
# We&amp;#039;ll take a look at what transfer learning is, how it works, and why and when it should be used.&amp;lt;ref name=&amp;quot;ref_8c43ea17&amp;quot;&amp;gt;[https://builtin.com/data-science/transfer-learning What is transfer learning? Exploring the popular deep learning approach]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning, used in machine learning, is the reuse of a pre-trained model on a new problem.&amp;lt;ref name=&amp;quot;ref_8c43ea17&amp;quot; /&amp;gt;&lt;br /&gt;
# In transfer learning, a machine exploits the knowledge gained from a previous task to improve generalization about another.&amp;lt;ref name=&amp;quot;ref_8c43ea17&amp;quot; /&amp;gt;&lt;br /&gt;
# In transfer learning, the knowledge of an already trained machine learning model is applied to a different but related problem.&amp;lt;ref name=&amp;quot;ref_8c43ea17&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is the same idea.&amp;lt;ref name=&amp;quot;ref_24aa72cf&amp;quot;&amp;gt;[https://blogs.nvidia.com/blog/2019/02/07/what-is-transfer-learning/ What Is Transfer Learning?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Recurrent neural networks, often used in speech recognition, can take advantage of transfer learning, as well.&amp;lt;ref name=&amp;quot;ref_24aa72cf&amp;quot; /&amp;gt;&lt;br /&gt;
# In this tutorial, you will learn how to train a convolutional neural network for image classification using transfer learning.&amp;lt;ref name=&amp;quot;ref_44ec95fd&amp;quot;&amp;gt;[https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html Transfer Learning for Computer Vision Tutorial — PyTorch Tutorials 1.7.1 documentation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Since we are using transfer learning, we should be able to generalize reasonably well.&amp;lt;ref name=&amp;quot;ref_44ec95fd&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning consists of taking features learned on one problem, and leveraging them on a new, similar problem.&amp;lt;ref name=&amp;quot;ref_9da0fac2&amp;quot;&amp;gt;[https://keras.io/guides/transfer_learning Transfer learning &amp;amp; fine-tuning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is typically used for tasks when your new dataset has too little data to train a full-scale model from scratch, and in such scenarios data augmentation is very important.&amp;lt;ref name=&amp;quot;ref_9da0fac2&amp;quot; /&amp;gt;&lt;br /&gt;
# To solidify these concepts, let&amp;#039;s walk you through a concrete end-to-end transfer learning &amp;amp; fine-tuning example.&amp;lt;ref name=&amp;quot;ref_9da0fac2&amp;quot; /&amp;gt;&lt;br /&gt;
# One answer is transfer learning.&amp;lt;ref name=&amp;quot;ref_206c921c&amp;quot;&amp;gt;[https://owkin.com/collaborative-ai/transfer-learning/ Transfer Learning and the Rise of Collaborative Artificial Intelligence]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is a domain of AI.&amp;lt;ref name=&amp;quot;ref_206c921c&amp;quot; /&amp;gt;&lt;br /&gt;
# It is probably the most used story of transfer learning practice at the moment and one of the hidden reasons why deep learning is such a success.&amp;lt;ref name=&amp;quot;ref_206c921c&amp;quot; /&amp;gt;&lt;br /&gt;
# Indeed, deep learning architecture is very well suited for the transfer learning approach.&amp;lt;ref name=&amp;quot;ref_206c921c&amp;quot; /&amp;gt;&lt;br /&gt;
# This methodology is called transfer learning.&amp;lt;ref name=&amp;quot;ref_d68a7d67&amp;quot;&amp;gt;[https://datascience.aero/transfer-learning-aviation/ Is Transfer Learning the final step for enabling AI in Aviation?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The key concept behind transfer learning in data science is deep learning models.&amp;lt;ref name=&amp;quot;ref_d68a7d67&amp;quot; /&amp;gt;&lt;br /&gt;
# In addition to being used to improve deep learning models, transfer learning is used in new methodologies for building and training machine learning models in general.&amp;lt;ref name=&amp;quot;ref_d68a7d67&amp;quot; /&amp;gt;&lt;br /&gt;
# The basic idea of transfer learning is then to start with a deep learning network that is pre-initialized from training of a similar problem.&amp;lt;ref name=&amp;quot;ref_9170cb47&amp;quot;&amp;gt;[https://developer.ibm.com/technologies/artificial-intelligence/articles/transfer-learning-for-deep-learning/ Transfer learning for deep learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is the method of starting with a pre-trained model and training it for a new — related — problem domain.&amp;lt;ref name=&amp;quot;ref_9170cb47&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is an important piece of many deep learning applications now and in the future.&amp;lt;ref name=&amp;quot;ref_9170cb47&amp;quot; /&amp;gt;&lt;br /&gt;
# The key to transfer learning is the generality of features within the learning model.&amp;lt;ref name=&amp;quot;ref_9170cb47&amp;quot; /&amp;gt;&lt;br /&gt;
# Following the same approach, the term Transfer Learning was introduced in the field of machine learning.&amp;lt;ref name=&amp;quot;ref_846ed3d1&amp;quot;&amp;gt;[https://www.geeksforgeeks.org/ml-introduction-to-transfer-learning/ Introduction to Transfer Learning - GeeksforGeeks]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# When dealing with transfer learning, we come across a phenomenon called freezing of layers.&amp;lt;ref name=&amp;quot;ref_846ed3d1&amp;quot; /&amp;gt;&lt;br /&gt;
# When we use transfer learning in solving a problem, we select a pre-trained model as our base model.&amp;lt;ref name=&amp;quot;ref_846ed3d1&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is a very effective and fast way to begin with a problem.&amp;lt;ref name=&amp;quot;ref_846ed3d1&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning has received the attention of data scientists as a methodology for taking advantage of available training data/models from related tasks and applying them to the problem at hand.&amp;lt;ref name=&amp;quot;ref_39abc2f4&amp;quot;&amp;gt;[https://www.nature.com/articles/s41598-019-41316-9 Using a Novel Transfer Learning Method for Designing Thin Film Solar Cells with Enhanced Quantum Efficiencies]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Examples of classification tasks that have benefited from transfer learning include image, web document, brain-computer interface, music and emotion classification.&amp;lt;ref name=&amp;quot;ref_39abc2f4&amp;quot; /&amp;gt;&lt;br /&gt;
# Despite the above-mentioned applications, transfer learning in optimization problems has not been evaluated thoroughly except in a few fields.&amp;lt;ref name=&amp;quot;ref_39abc2f4&amp;quot; /&amp;gt;&lt;br /&gt;
# There are reports of the use of transfer learning in automatic hyper-parameter tuning problems to increase training speed and improve prediction accuracy.&amp;lt;ref name=&amp;quot;ref_39abc2f4&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is a well-established technique for training artificial neural networks (see e.g., Ref.&amp;lt;ref name=&amp;quot;ref_c9ee6701&amp;quot;&amp;gt;[https://pennylane.ai/qml/demos/tutorial_quantum_transfer_learning.html Quantum transfer learning — PennyLane]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We focus on the CQ transfer learning scheme discussed in the previous section and we give a specific example.&amp;lt;ref name=&amp;quot;ref_c9ee6701&amp;quot; /&amp;gt;&lt;br /&gt;
# This is a very small dataset (roughly 250 images), too small for training a classical or quantum model from scratch; however, it is enough when using a transfer learning approach.&amp;lt;ref name=&amp;quot;ref_c9ee6701&amp;quot; /&amp;gt;&lt;br /&gt;
# We follow the transfer learning approach: First load the classical pre-trained network ResNet18 from the torchvision.models zoo.&amp;lt;ref name=&amp;quot;ref_c9ee6701&amp;quot; /&amp;gt;&lt;br /&gt;
# This paper demonstrates the versatility of this type of regularizer across transfer learning scenarios.&amp;lt;ref name=&amp;quot;ref_fb2e9f9b&amp;quot;&amp;gt;[https://www.sciencedirect.com/science/article/pii/S0262885619304469 Transfer learning in computer vision tasks: Remember where you come from]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer Learning has been utilized by humans since time immemorial.&amp;lt;ref name=&amp;quot;ref_9c61f805&amp;quot;&amp;gt;[https://www.analyticsinsight.net/transfer-learning-in-deep-learning/ Transfer Learning in Deep Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Though this field of transfer learning is relatively new to machine learning, humans have used this inherently in almost every situation.&amp;lt;ref name=&amp;quot;ref_9c61f805&amp;quot; /&amp;gt;&lt;br /&gt;
# We always try to apply the knowledge gained from our past experiences when we face a new problem or task and this is the basis of transfer learning.&amp;lt;ref name=&amp;quot;ref_9c61f805&amp;quot; /&amp;gt;&lt;br /&gt;
# To understand the basic notion of Transfer Learning, consider a model X that is successfully trained to perform task A with model M1.&amp;lt;ref name=&amp;quot;ref_9c61f805&amp;quot; /&amp;gt;&lt;br /&gt;
# The authors cover historic methods as well as very recent methods, classifying them into a comprehensive ontology of transfer learning methods.&amp;lt;ref name=&amp;quot;ref_ab3e480a&amp;quot;&amp;gt;[https://www.cambridge.org/core/books/transfer-learning/CCFFAFE3CDBC245047F1DEC71D9EF3C7 Transfer Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Hereafter, successful applications of the shotgun transfer learning in four different scenarios will be described.&amp;lt;ref name=&amp;quot;ref_23af4c3c&amp;quot;&amp;gt;[https://pubs.acs.org/doi/10.1021/acscentsci.9b00804 Predicting Materials Properties with Little Data Using Shotgun Transfer Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We first report a successful application that illustrates the analytic workflow of the transfer learning and some of its potential.&amp;lt;ref name=&amp;quot;ref_23af4c3c&amp;quot; /&amp;gt;&lt;br /&gt;
# Illustrative example of transfer learning for prediction of polymeric C_P.&amp;lt;ref name=&amp;quot;ref_23af4c3c&amp;quot; /&amp;gt;&lt;br /&gt;
# The left two panels show prediction performance of a directly supervised random forest and the best transfer learning model using 58 instances of the polymeric C_P under 5-fold CV.&amp;lt;ref name=&amp;quot;ref_23af4c3c&amp;quot; /&amp;gt;&lt;br /&gt;
# How do you decide what type of transfer learning you should perform on a new dataset?&amp;lt;ref name=&amp;quot;ref_9abec200&amp;quot;&amp;gt;[https://cs231n.github.io/transfer-learning/ CS231n Convolutional Neural Networks for Visual Recognition]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# This form of transfer learning used in deep learning is called inductive transfer.&amp;lt;ref name=&amp;quot;ref_ae6f3b6d&amp;quot;&amp;gt;[https://machinelearningmastery.com/transfer-learning-for-deep-learning/ A Gentle Introduction to Transfer Learning for Deep Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# To learn more, visit the Transfer learning guide.&amp;lt;ref name=&amp;quot;ref_fde2c1a9&amp;quot;&amp;gt;[https://www.tensorflow.org/tutorials/images/transfer_learning Transfer learning and fine-tuning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Over the course of this blog post, I will first contrast transfer learning with machine learning&amp;#039;s most pervasive and successful paradigm, supervised learning.&amp;lt;ref name=&amp;quot;ref_1c7b7608&amp;quot;&amp;gt;[https://ruder.io/transfer-learning/ Transfer Learning - Machine Learning&amp;#039;s Next Frontier]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# I will then outline reasons why transfer learning warrants our attention.&amp;lt;ref name=&amp;quot;ref_1c7b7608&amp;quot; /&amp;gt;&lt;br /&gt;
# Subsequently, I will give a more technical definition and detail different transfer learning scenarios.&amp;lt;ref name=&amp;quot;ref_1c7b7608&amp;quot; /&amp;gt;&lt;br /&gt;
# I will then provide examples of applications of transfer learning before delving into practical methods that can be used to transfer knowledge.&amp;lt;ref name=&amp;quot;ref_1c7b7608&amp;quot; /&amp;gt;&lt;br /&gt;
# In this example, we will see how each of these classifiers can be implemented in a transfer learning solution for image classification.&amp;lt;ref name=&amp;quot;ref_cf3b4ddc&amp;quot;&amp;gt;[https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751 Transfer learning from pre-trained models]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The three transfer categories discussed in the previous section outline different settings where transfer learning can be applied, and studied in detail.&amp;lt;ref name=&amp;quot;ref_0ee2dbcf&amp;quot;&amp;gt;[https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In case of inductive transfer, modifications such as AdaBoost by Dai and their co-authors help utilize training instances from the source domain for improvements in the target task.&amp;lt;ref name=&amp;quot;ref_0ee2dbcf&amp;quot; /&amp;gt;&lt;br /&gt;
# Inductive transfer techniques utilize the inductive biases of the source task to assist the target task.&amp;lt;ref name=&amp;quot;ref_0ee2dbcf&amp;quot; /&amp;gt;&lt;br /&gt;
# These pre-trained networks/models form the basis of transfer learning in the context of deep learning, or what I like to call ‘deep transfer learning’.&amp;lt;ref name=&amp;quot;ref_0ee2dbcf&amp;quot; /&amp;gt;&lt;br /&gt;
# In 1976 Stevo Bozinovski and Ante Fulgosi published a paper explicitly addressing transfer learning in neural networks training.&amp;lt;ref name=&amp;quot;ref_3bba5fa0&amp;quot;&amp;gt;[https://en.wikipedia.org/wiki/Transfer_learning Transfer learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The paper gives a mathematical and geometrical model of transfer learning.&amp;lt;ref name=&amp;quot;ref_3bba5fa0&amp;quot; /&amp;gt;&lt;br /&gt;
# In 1981 a report was given on application of transfer learning in training a neural network on a dataset of images representing letters of computer terminals.&amp;lt;ref name=&amp;quot;ref_3bba5fa0&amp;quot; /&amp;gt;&lt;br /&gt;
# Both positive and negative transfer learning were experimentally demonstrated.&amp;lt;ref name=&amp;quot;ref_3bba5fa0&amp;quot; /&amp;gt;&lt;br /&gt;
# Combined with the idea of transfer learning, the problem of label-free transfer in the target domain was solved.&amp;lt;ref name=&amp;quot;ref_17b3f591&amp;quot;&amp;gt;[https://www.mdpi.com/2076-3417/10/7/2361/htm Transfer Learning Strategies for Deep Learning-based PHM Algorithms]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In Section 2 , the background information of transfer learning is outlined and the transfer scenarios are defined according to the data situation of the target domain and the source domain.&amp;lt;ref name=&amp;quot;ref_17b3f591&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.&amp;lt;ref name=&amp;quot;ref_cee180eb&amp;quot;&amp;gt;[https://alexmoltzau.medium.com/what-is-transfer-learning-6ebb03be77ee What is Transfer Learning?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# I still have not made up my mind, but transfer learning is a topic that I will have to pursue further.&amp;lt;ref name=&amp;quot;ref_cee180eb&amp;quot; /&amp;gt;&lt;br /&gt;
# This is where a technique called ‘transfer learning’ comes in.&amp;lt;ref name=&amp;quot;ref_4e373ee0&amp;quot;&amp;gt;[https://www.thinkautomation.com/eli5/transfer-learning-in-laymans-terms/ Transfer learning in layman’s terms]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In transfer learning, you have a source model trained on a specific dataset.&amp;lt;ref name=&amp;quot;ref_4e373ee0&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning means you’re not starting from scratch – thereby speeding up training time.&amp;lt;ref name=&amp;quot;ref_4e373ee0&amp;quot; /&amp;gt;&lt;br /&gt;
# Beyond the observable benefits, perfecting transfer learning techniques could bring us closer to artificial general intelligence (AGI).&amp;lt;ref name=&amp;quot;ref_4e373ee0&amp;quot; /&amp;gt;&lt;br /&gt;
# As described above, the ULMFiT is a three-stage transfer learning process that includes two types of models: language models and classification/regression models.&amp;lt;ref name=&amp;quot;ref_2439392e&amp;quot;&amp;gt;[https://jcheminf.biomedcentral.com/articles/10.1186/s13321-020-00430-x Inductive transfer learning for molecular activity prediction: Next - Gen QSAR Models with MolPMoFiT]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Recall that homogeneous transfer learning is the case where \({\mathcal{X}}_{{\mathcal{S}}} = {\mathcal{X}}_{{\mathcal{T}}}\).&amp;lt;ref name=&amp;quot;ref_33f4f99f&amp;quot;&amp;gt;[https://journalofbigdata.springeropen.com/articles/10.1186/s40537-016-0043-6 A survey of transfer learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In a transfer learning environment, there are scenarios where a feature in the source domain may have a different meaning in the target domain.&amp;lt;ref name=&amp;quot;ref_33f4f99f&amp;quot; /&amp;gt;&lt;br /&gt;
# These transfer learning approaches only attempt to correct for marginal distribution differences between domains.&amp;lt;ref name=&amp;quot;ref_33f4f99f&amp;quot; /&amp;gt;&lt;br /&gt;
# All transfer learning approaches perform better than the baseline approaches.&amp;lt;ref name=&amp;quot;ref_33f4f99f&amp;quot; /&amp;gt;&lt;br /&gt;
# We also theoretically analyse the algorithmic stability and generalization bound of L2T, and empirically demonstrate its superiority over several state-of-the-art transfer learning algorithms.&amp;lt;ref name=&amp;quot;ref_1b360b69&amp;quot;&amp;gt;[http://proceedings.mlr.press/v80/wei18a.html Transfer Learning via Learning to Transfer]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer Learning via Learning to Transfer.&amp;lt;ref name=&amp;quot;ref_1b360b69&amp;quot; /&amp;gt;&lt;br /&gt;
# This is where transfer learning comes into play.&amp;lt;ref name=&amp;quot;ref_78c4967c&amp;quot;&amp;gt;[https://bdtechtalks.com/2019/06/10/what-is-transfer-learning/ What is transfer learning?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning doesn’t require huge compute resources.&amp;lt;ref name=&amp;quot;ref_78c4967c&amp;quot; /&amp;gt;&lt;br /&gt;
# When doing transfer learning, AI engineers freeze the first layers of the pretrained neural network.&amp;lt;ref name=&amp;quot;ref_78c4967c&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning solves many of the problems of training AI models in an efficient and affordable way.&amp;lt;ref name=&amp;quot;ref_78c4967c&amp;quot; /&amp;gt;&lt;br /&gt;
# In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances.&amp;lt;ref name=&amp;quot;ref_81b8f5c8&amp;quot;&amp;gt;[http://papers.nips.cc/paper/5209-transfer-learning-in-a-transductive-setting Paper]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is a machine learning method that involves reusing an existing, trained neural network, developed for one task, as the foundation for another task.&amp;lt;ref name=&amp;quot;ref_1c8c2b66&amp;quot;&amp;gt;[https://analyticsengines.com/2019/11/29/insights-transfer-learning-doing-more-with-much-less/ Transfer Learning – Doing more with (much) less…]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# The main challenge of transfer learning is to retain the existing knowledge in the model while adapting the model to your own task.&amp;lt;ref name=&amp;quot;ref_1c8c2b66&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning works with neural networks in a way that it does not with the simpler one-layer models such as logistic regression.&amp;lt;ref name=&amp;quot;ref_1c8c2b66&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning works with neural networks as the different layers of the network can be treated differently.&amp;lt;ref name=&amp;quot;ref_1c8c2b66&amp;quot; /&amp;gt;&lt;br /&gt;
# How to use transfer learning to build state-of-the-art customer service AI!&amp;lt;ref name=&amp;quot;ref_d4d394fc&amp;quot;&amp;gt;[https://www.ultimate.ai/blog/ai-automation/transfer-learning-in-customer-service-automation Transfer Learning in Customer Service Automation]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is a method that allows us to use the knowledge gained from other tasks in order to tackle new but similar problems quickly and effectively.&amp;lt;ref name=&amp;quot;ref_d4d394fc&amp;quot; /&amp;gt;&lt;br /&gt;
# Solving the Finnish problem with transfer learning prompted us to develop our architecture to use a single model across all clients and regions.&amp;lt;ref name=&amp;quot;ref_d4d394fc&amp;quot; /&amp;gt;&lt;br /&gt;
# More interestingly, by being able to apply ways of thinking from one task to another, transfer learning unlocks deep learning potential from smaller datasets.&amp;lt;ref name=&amp;quot;ref_d4d394fc&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly.&amp;lt;ref name=&amp;quot;ref_75c4c082&amp;quot;&amp;gt;[https://openreview.net/forum?id=ryebG04YvB Adversarially robust transfer learning]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain.&amp;lt;ref name=&amp;quot;ref_75c4c082&amp;quot; /&amp;gt;&lt;br /&gt;
# Recently, transfer learning methods have been applied to reuse knowledge from performance models trained in one environment in another.&amp;lt;ref name=&amp;quot;ref_ba4ac57b&amp;quot;&amp;gt;[https://www.usenix.org/conference/opml19/presentation/iqbal Transfer Learning for Performance Modeling of Deep Neural Network Systems]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In this paper, we perform an empirical study to understand the effectiveness of different transfer learning strategies for building performance models of DNN systems.&amp;lt;ref name=&amp;quot;ref_ba4ac57b&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is the application of knowledge gained from completing one task to help solve a different, but related, problem.&amp;lt;ref name=&amp;quot;ref_160f5e3c&amp;quot;&amp;gt;[https://searchcio.techtarget.com/definition/transfer-learning What is transfer learning?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Through transfer learning, methods are developed to transfer knowledge from one or more of these source tasks to improve learning in a related target task.&amp;lt;ref name=&amp;quot;ref_160f5e3c&amp;quot; /&amp;gt;&lt;br /&gt;
# During transfer learning, knowledge is leveraged from a source task to improve learning in a new task.&amp;lt;ref name=&amp;quot;ref_160f5e3c&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning is an approach used in machine learning where a model that was created and trained for one task, is reused as the starting point for a secondary task.&amp;lt;ref name=&amp;quot;ref_4d8dda78&amp;quot;&amp;gt;[https://missinglink.ai/guides/neural-network-concepts/transfer-learning-overview/ Transfer Learning: An Overview]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Transfer learning is a widely used technique for improving the performance of neural networks when labeled training data is scarce.&amp;lt;ref name=&amp;quot;ref_cbc4fb65&amp;quot;&amp;gt;[https://www.amazon.science/blog/when-does-transfer-learning-work When does transfer learning work?]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# When is transfer learning effective, and when is it not?&amp;lt;ref name=&amp;quot;ref_cbc4fb65&amp;quot; /&amp;gt;&lt;br /&gt;
# And if you’re going to do transfer learning, what task should you use for pretraining?&amp;lt;ref name=&amp;quot;ref_cbc4fb65&amp;quot; /&amp;gt;&lt;br /&gt;
# One of the settings we considered was that of meta-transfer learning, which is a combination of transfer learning and meta-learning.&amp;lt;ref name=&amp;quot;ref_cbc4fb65&amp;quot; /&amp;gt;&lt;br /&gt;
# Transfer learning reduces the size of a training dataset by utilizing the knowledge in a pre-trained neural network.&amp;lt;ref name=&amp;quot;ref_fb804ccd&amp;quot;&amp;gt;[https://www.nature.com/articles/s41598-020-64165-3 Stepwise PathNet: a layer-by-layer knowledge-selection-based transfer learning algorithm]&amp;lt;/ref&amp;gt;&lt;br /&gt;
# In some transfer learning cases, the pre-trained neural network for the source task has been trained by a large computer.&amp;lt;ref name=&amp;quot;ref_fb804ccd&amp;quot; /&amp;gt;&lt;br /&gt;
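Several of the items above mention freezing the first layers of a pretrained network and training only a new head on the target task. The following is a minimal illustrative sketch of that idea (not taken from any of the cited sources; the shapes, learning rate, and random data are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" first-layer weights, standing in for a network trained on a source task.
W_frozen = rng.normal(size=(4, 8))

# New task-specific head, trained from scratch on the target task.
W_head = rng.normal(size=(8, 1))

def forward(x):
    h = np.maximum(0.0, x @ W_frozen)  # frozen feature extractor (ReLU layer)
    return h @ W_head                  # trainable head

# One gradient step on the head only; W_frozen is never updated.
x = rng.normal(size=(16, 4))
y = rng.normal(size=(16, 1))

h = np.maximum(0.0, x @ W_frozen)
pred = h @ W_head
grad_head = h.T @ (pred - y) / len(x)  # gradient of the MSE loss w.r.t. W_head
W_head -= 0.01 * grad_head             # only the head moves
```

In framework terms, this corresponds to marking the pretrained layers as non-trainable so that only the new head's parameters receive gradient updates.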
===Sources===&lt;br /&gt;
 &amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pythagoras0</name></author>
	</entry>
</feed>