"알파고 제로"의 두 판 사이의 차이
Notes
Wikidata
- ID : Q42259287
Corpus
- AlphaGo Zero was able to defeat its predecessor in only three days' time with less processing power than AlphaGo.[1]
- All these facts beg the question: what makes AlphaGo Zero so exceptional?[1]
- Simply put, AlphaGo Zero is the strongest Go program in the world (with the exception of AlphaZero).[1]
- But AlphaGo Zero didn’t use any human data whatsoever.[1]
- In October 2017, the DeepMind team published details of a new Go-playing system, AlphaGo Zero, that studied no human games at all.[2]
- At the end of these scrimmages, AlphaGo Zero went head to head with the already superhuman version of AlphaGo that had beaten Lee Sedol.[2]
- But perhaps even more significant than these victories is how AlphaGo Zero became so dominant.[3]
- AlphaGo Zero even devised its own unconventional strategies.[3]
- This tutorial walks through a synchronous single-thread single-GPU (read malnourished) game-agnostic implementation of the recent AlphaGo Zero paper by DeepMind.[4]
- The methods are fairly simple compared to previous papers by DeepMind, and AlphaGo Zero ends up convincingly beating AlphaGo (which was trained using data from expert games and beat the best human Go players).[4]
- Recently, DeepMind published a preprint of Alpha Zero on arXiv that extends AlphaGo Zero methods to Chess and Shogi.[4]
- The aim of this post is to distil out the key ideas from the AlphaGo Zero paper and understand them concretely through code.[4]
- It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher.[5]
- This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again.[5]
- Google later developed AlphaZero, a generalized version of AlphaGo Zero that could play chess and Shōgi in addition to Go.[6]
- In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee on an Elo scale.[6]
- AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers.[6]
- AlphaGo Zero was widely regarded as a significant advance, even when compared with its groundbreaking predecessor, AlphaGo.[6]
- The new AlphaGo Zero beat the previous version by 100 games to 0, and learned Go completely on its own.[7]
- In October, DeepMind published a paper describing a new version of AlphaGo, called AlphaGo Zero.[8]
- Not only that, AlphaGo Zero learned to play Go without any prior knowledge of the game (in other words, tabula rasa).[8]
- AlphaGo Zero learns to play Go by simulating matches against itself in a procedure referred to as self-play.[8] (see the self-play sketch after this list)
- This is not to say that AlphaGo Zero is not an amazing achievement (it is!).[8]
- Remarkably, during this self-teaching process AlphaGo Zero discovered many of the tricks and techniques that human Go players have developed over the past several thousand years.[9]
- AlphaGo Zero simply played against itself, randomly at first.[9]
- This changed when DeepMind released AlphaGo Zero in late 2017.[10]
- We may expect AlphaGo Zero to be more complicated and harder to train.[10]
- In AlphaGo Zero, we train a single network f using self-play games.[10]
- After three days of self-play using hugely powerful computer systems that let it play games at superhuman speeds, AlphaGo Zero was able to defeat its predecessor 100 games to nil.[11]
- To address this challenge, we start by taking steps towards developing a formal understanding of AGZ.[12]
- They had to make a newer and better version called AlphaGo Zero.[13]
- AlphaGo Zero uses only reinforcement learning.[13]
- AlphaGo Zero uses only 1 neural network.[13]
- AlphaGo Zero has some very similar features to AlphaGo Lee, but its distinct differences are what makes the new version so dominant.[13]
- Among their accomplishments, particular focus will be placed upon the recent success of AlphaGo Zero which made waves in the machine learning and artificial intelligence communities.[14]
- The result, AlphaGo Zero, detailed in a paper published in October, 2017, was so called because it had zero knowledge of Go beyond the rules.[15]
- When the AlphaGo Zero and AlphaZero papers were published, a small army of enthusiasts began describing the systems in blog posts and YouTube videos and building their own copycat versions.[15]
- The distributed LeelaZero community has had their system play more than ten million games against itself—a little more than AlphaGo Zero.[15]
- The latest updates present in AlphaGo Zero left researchers in awe.[16]
- With AlphaGo Zero, DeepMind pushed RL’s independence from data further by starting with 100% randomness.[16]
- AlphaGo Zero is much less demanding than the old AlphaGo, but running the same setup would still take 1700 GPU-years with ordinary hardware.[17]
- In this setup, Nochi was the first AlphaGo Zero replication to reach the level of the GNU Go baseline.[17]
- Several other efforts to replicate the success of AlphaGo Zero are now underway – e.g. Leela Zero and Odin Zero.[17]
- There were many advances in Deep Learning and AI in 2017, but few generated as much publicity and interest as DeepMind’s AlphaGo Zero.[18]
- In this essay, I’ll try to give an intuitive idea of the techniques AlphaGo Zero used, what made them work, and what the implications for future AI research are.[18]
- This data, generated purely via lookahead and self-play, is what DeepMind used to train AlphaGo Zero.[18]
- One of AlphaGo Zero's key innovations was its neural network architecture, a "two-headed" architecture.[18] (see the network sketch after this list)
- A simplified, highly flexible, commented and (hopefully) easy-to-understand implementation of self-play based reinforcement learning based on the AlphaGo Zero paper (Silver et al.).[19]
- If you follow artificial intelligence research, you probably saw last week's Nature article on DeepMind's AlphaGo Zero.[20]
- This is the promise that AlphaGo Zero represents.[20]
- While previous versions learned from thousands of human amateur and professional games, AlphaGo Zero taught itself to play by receiving only the goal (win), the rules, and feedback on its success.[20]
- Even so, he notes, AlphaGo Zero is an important advance.[20]
- Google's AI research arm, DeepMind, announced in an article in the journal Nature today that it has built a final version of its prolific digital Go master: AlphaGo Zero.[21]
- The most important new modification is how AlphaGo Zero learned to master the game.[21]
- AlphaGo Zero doesn’t have hints from humans that previous systems had, like which pieces are whose or how to interpret the board.[21]
- AlphaGo Zero could beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours and earned its 100-0 score after 72 hours.[21]
- AlphaGo Zero used a Deep Reinforcement Learning approach and a tree-based search strategy.[22]
- Besides the human-like ingenuity displayed in learning, the reason why AlphaGo Zero is a big step forward is because the system got rid of the supervision and feature engineering.[22]
- At NIPS 2017, Deep Reinforcement Learning was the most popular topic and DeepMind has delivered great results with AlphaGo Zero which plays at superhuman level.[22]
- A new and much more powerful version of the program called AlphaGo Zero unveiled Wednesday is even more capable of surprises.[23]
- AlphaGo Zero showcases an approach to teaching machines new tricks that makes them less reliant on humans.[23]
- DeepMind CEO Demis Hassabis said in a press briefing Monday that the guts of AlphaGo Zero should be adaptable to scientific problems such as drug discovery, or understanding protein folding.[23]
- AlphaGo Zero is so-named because it doesn’t need human knowledge to get started, relying solely on that self-play mechanism.[23]
- AlphaGo Zero was built on an improved reinforcement-learning system, and it trained itself from scratch without any input from human games.[24]
- The AlphaGo Zero AI program just became the Go champion of the world without human data or guidance.[25]
- AlphaGo Zero was not shown a single human game of Go from which to learn.[25]
- AlphaGo Zero learned entirely from playing against itself, with no prior knowledge of the game.[25]
- After just three days of playing against itself (4.9 million times), AlphaGo Zero beat AlphaGo by 100 games to 0.[25]
- DeepMind has shaken the world of Reinforcement Learning and Go with its creation AlphaGo, and later AlphaGo Zero.[26]
- The AlphaGo Zero pipeline is divided into three main components (just as in the previous article, on World Models), each running asynchronously in a separate process.[26]
- This is also why AlphaGo Zero is sometimes called the two-headed beast: a body, which is the feature extractor, and two heads: policy and value.[26]
- I want to thank my school’s AI association for letting me use the server to try to train this implementation of AlphaGo Zero.[26]
- Using less computing power and only three days of training time, AlphaGo Zero beat the original AlphaGo in a 100-game match by 100 to 0.[27]
- By contrast, AlphaGo Zero never saw humans play.[27]
- AlphaGo Zero achieved this feat by approaching the problem differently from the original AlphaGo.[27]
- In the pure reinforcement learning approach of AlphaGo Zero, the only signal available for learning policies and values was predicting who would ultimately win.[27]
- That Go-playing virtual intelligence was called AlphaGo Zero, and it managed to rediscover over 3,000 years of human knowledge around the game in just 72 hours.[28]
- Perhaps the most interesting thing about AlphaGo Zero, though, isn't how fast or how effectively it did what it did, but that it ultimately didn't even achieve its full potential.[28]
- Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules.[29]
- David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch.[29]
- When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win.[29]
- Though far better than previous versions, AlphaGo Zero is a simpler program and mastered the game faster despite training on less data and running on a smaller computer.[29]
- In this way, AlphaGo Zero begins as a clean slate and learns from itself.[30]
- Revealed in October 2017, AlphaGo Zero was the first computer program that learns to play simply by playing games against itself, starting from completely random play.[31]
- Point being: AlphaGo Zero (which we’ll go ahead and shorten to AG0) is arguably the most impressive and definitely the most praised recent AI accomplishment.[32]
- Roughly speaking, AG0 is just a Deep Neural Network that takes the current state of a Go board as input, and outputs a Go move.[32]
- With those positive things having been said, some perspective: AG0 is not really a testament to the usefulness of such techniques for solving the hard problems of AI.[32]
- AG0 is a definite example of Weak AI, also known as narrow AI.[32]
- AlphaGo Zero scored 20-0.[33]
- AlphaGo Zero scored 17-3.[33]
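Several of the quotes above ([5], [18], [26], [32]) describe the same design: a single network with a shared feature-extractor body and two heads, trained toward the search probabilities and outcomes produced by self-play. Below is a minimal PyTorch sketch of that two-headed architecture and its loss, not DeepMind's code: the layer sizes and depths are illustrative assumptions (the real network uses a deep residual tower and a multi-plane board encoding).

import torch
import torch.nn as nn

BOARD = 19                  # Go board width/height
MOVES = BOARD * BOARD + 1   # one logit per point, plus pass

class TwoHeadedNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Body: shared convolutional feature extractor.
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Policy head: logits over all moves.
        self.policy = nn.Sequential(
            nn.Conv2d(channels, 2, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(2 * BOARD * BOARD, MOVES),
        )
        # Value head: a scalar in [-1, 1] estimating who wins.
        self.value = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(BOARD * BOARD, 1), nn.Tanh(),
        )

    def forward(self, board):
        features = self.body(board)
        return self.policy(features), self.value(features)

# Training target, as described in the quotes: cross-entropy between the
# policy head and the search probabilities pi (from MCTS visit counts),
# plus squared error between the value head and the self-play outcome z.
def loss_fn(policy_logits, value, pi, z):
    policy_loss = -(pi * torch.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    value_loss = ((value.squeeze(1) - z) ** 2).mean()
    return policy_loss + value_loss

net = TwoHeadedNet()
boards = torch.randn(4, 1, BOARD, BOARD)   # stand-in positions
pi = torch.full((4, MOVES), 1.0 / MOVES)   # stand-in search probabilities
z = torch.tensor([1.0, -1.0, 1.0, -1.0])   # stand-in game outcomes
logits, v = net(boards)
print(loss_fn(logits, v, pi, z))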
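A companion sketch of the self-play data generation the quotes describe ([8], [18], [20]): play complete games against yourself and record, for each position, the move distribution pi and the eventual outcome z as training examples. Tic-tac-toe and uniform-random move selection stand in here for Go and for the MCTS lookahead; everything below is an illustrative assumption, not the paper's code.

import random

# The eight winning lines of tic-tac-toe, by board index.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    board, player, history = ['.'] * 9, 'X', []
    while True:
        moves = [i for i, s in enumerate(board) if s == '.']
        # Uniform "search" distribution; AlphaGo Zero instead derives pi
        # from MCTS visit counts guided by the two-headed network.
        pi = {m: 1.0 / len(moves) for m in moves}
        history.append((''.join(board), player, pi))
        board[random.choice(moves)] = player
        w = winner(board)
        if w is not None or '.' not in board:
            # Label every recorded position with the final outcome z, from
            # the player-to-move's perspective: +1 win, -1 loss, 0 draw.
            return [(state, pi, 0 if w is None else (1 if p == w else -1))
                    for state, p, pi in history]
        player = 'O' if player == 'X' else 'X'

for state, pi, z in self_play_game():
    print(state, z)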
Sources
1. Why DeepMind AlphaGo Zero is a game changer for AI research
2. Quanta Magazine
3. AI versus AI: Self-Taught AlphaGo Zero Vanquishes Its Predecessor
4. Simple Alpha Zero
5. AlphaGo Zero: Starting from scratch
6. AlphaGo Zero
7. AlphaGo Zero: The Most Significant Research Advance in AI
8. How much did AlphaGo Zero cost?
9. AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help
10. AlphaGo Zero — a game changer. (How it works?)
11. Former Go champion beaten by DeepMind retires after declaring AI invincible
12. Understanding & Generalizing AlphaGo Zero
13. Enjoy the GO with Alpha Go
14. Overview on DeepMind and Its AlphaGo Zero AI
15. How the Artificial Intelligence Program AlphaZero Mastered Its Games
16. AlphaGo Master vs AlphaGo Zero - The Power of Reinforcement Learning
17. Building Our Own Version of AlphaGo Zero
18. The 3 Tricks That Made AlphaGo Zero Work
19. suragnair/alpha-zero-general: A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
20. What AlphaGo Zero Means for Artificial Intelligence Drug Discovery
21. DeepMind has a bigger plan for its newest Go-playing AI
22. Is DeepMind’s AlphaGo Zero Really A Scientific Breakthrough?
23. This More Powerful Version of AlphaGo Learns On Its Own
24. Chapter 14. AlphaGo Zero: Integrating tree search with reinforcement learning · Deep Learning and the Game of Go
25. DeepMind’s AlphaGo Zero Becomes Go Champion Without Human Input
26. AlphaGo Zero demystified
27. Google AlphaGo Zero masters the game in three days
28. DeepMind has yet to find out how smart its AlphaGo Zero AI could be – TechCrunch
29. 'It's able to create knowledge itself': Google unveils AI that learns on its own
30. New AI Learns From Scratch
31. Feature: One man's Go program looks to remake AlphaGo Zero - and beyond - Xinhua
32. AlphaGo Zero Is Not A Sign of Imminent Human-Level AI
33. AlphaGo
Metadata
Wikidata
- ID : Q42259287
Spacy pattern list
- [{'LOWER': 'alphago'}, {'LEMMA': 'zero'}]
- [{'LEMMA': 'AG0'}]
- [{'LEMMA': 'AGZ'}]
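A minimal sketch of how the token patterns above could be used with spaCy's rule-based Matcher to find mentions of AlphaGo Zero in text. The pipeline name en_core_web_sm is an assumption (any English pipeline with a lemmatizer works), and note that the LEMMA patterns for 'AG0'/'AGZ' only fire if the lemmatizer leaves those tokens unchanged.

import spacy
from spacy.matcher import Matcher

# en_core_web_sm is assumed; substitute any installed English pipeline.
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# The three patterns listed above, registered under one rule name.
matcher.add("ALPHAGO_ZERO", [
    [{'LOWER': 'alphago'}, {'LEMMA': 'zero'}],
    [{'LEMMA': 'AG0'}],
    [{'LEMMA': 'AGZ'}],
])

doc = nlp("AlphaGo Zero, often shortened to AG0 or AGZ, taught itself Go.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)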