"병렬 컴퓨팅"의 두 판 사이의 차이

수학노트
(→Notes: new section)
 
 
(One intermediate revision by the same user not shown)
Line 54:

 ===Sources===

   <references />

+ ==Metadata==
+
+ ===Wikidata===
+
+ * ID : [https://www.wikidata.org/wiki/Q232661 Q232661]
+
+ ===Spacy pattern list===
+
+ * [{'LOWER': 'parallel'}, {'LEMMA': 'computing'}]
+
+ * [{'LEMMA': 'parallelism'}]
+
+ * [{'LOWER': 'parallel'}, {'LEMMA': 'processing'}]
Latest revision as of 01:06, 17 February 2021 (Wed)

Notes

Wikidata

Corpus

  1. Parallel computing has long played a vital role in addressing the performance demands of high-end engineering and scientific applications.[1]
  2. Over the last decade, parallel computing has become important to a much broader audience as the regular increases in clock speed that previously fueled performance increases became infeasible.[1]
  3. Almost all increases in the performance of recent processors come from more parallelism rather than from increases in clock speed.[1]
  4. Parallel computing has become an important subject in the field of computer science and has proven to be critical when researching high performance solutions.[2]
  5. The evolution of computer architectures (multi-core and many-core) towards a higher number of cores can only confirm that parallelism is the method of choice for speeding up an algorithm.[2]
  6. In the last decade, the graphics processing unit, or GPU, has gained an important place in the field of high performance computing (HPC) because of its low cost and massive parallel processing power.[2]
  7. In this paper, we survey the concept of parallel computing and especially GPU computing.[2]
  8. Definition - What does Parallel Computing mean?[3]
  9. Parallel computing is a form of computation in which two or more processors are used to solve a problem or task.[4]
  10. It’s mainly for this reason that parallel computing became established as the dominant paradigm.[4]
  11. Instruction-level parallelism: a program’s instructions are reordered and grouped for parallel execution.[4]
  12. Parallel computing is defined as ‘Involving the concurrent or simultaneous performance of certain operations’.[5]
  13. Within this context the journal covers all aspects of high-end parallel computing, from single homogeneous or heterogeneous computing nodes to large-scale multi-node systems.[6]
  14. If parallel computing has a central tenet, that might be it.[7]
  15. Parallel processing refers to speeding up a computational task by dividing it into smaller jobs across multiple processors.[7] (A minimal sketch of this idea appears as the first code example after this list.)
  16. When you tap the Weather Channel app on your phone to check the day’s forecast, thank parallel processing.[7]
  17. Parallel computing is the backbone of other scientific studies, too, including astrophysics simulations, seismic surveying, quantum chromodynamics and more.[7]
  18. Increasingly, parallel processing is being seen as the only cost-effective method for the fast solution of computationally large and data-intensive problems.[8]
  19. The Wolfram Language provides a uniquely integrated and automated environment for parallel computing.[9]
  20. The popularization and evolution of parallel computing in the 21st century came in response to processor frequency scaling hitting the power wall.[10]
  21. The importance of parallel computing continues to grow with the increasing usage of multicore processors and GPUs.[10]
  22. Parallel computer architecture exists in a wide variety of parallel computers, classified according to the level at which the hardware supports parallelism.[10]
  23. There is much overlap in distributed and parallel computing and the terms are sometimes used interchangeably.[10]
  24. Bit-level parallelism: it is the form of parallel computing that is based on increasing the processor’s word size.[11]
  25. Instruction-level parallelism: without it, a processor can only issue less than one instruction in each clock-cycle phase.[11]
  26. Reordering and grouping a program’s instructions for concurrent execution is what is called instruction-level parallelism.[11]
  27. Task parallelism employs the decomposition of a task into subtasks and then allocating each of the subtasks for execution.[11]
  28. As stated above, there are two ways to achieve parallelism in computing.[12]
  29. Most programmers have little or no experience with parallel computing, and there are few parallel programs to use off-the-shelf or even good examples to copy from.[13]
  30. Hence people often have to reinvent the parallel wheel (see "Parallelism needs Classes for the Masses").[13]
  31. Yet other computers exploit graphics processor units, GPUs, to achieve parallelism.[13]
  32. Overall, this means that there is a massive need to make use of the parallelism in multi-core chips for almost any problem, and to use many of these combined together for large problems.[13]
  33. If you’re at all involved in tech, chances are you’ve heard about parallel computing.[14]
  34. Parallel computing uses multiple computer cores to attack several operations at once.[14]
  35. Without parallel computing, performing digital tasks would be tedious, to say the least.[14]
  36. The advantages of parallel computing are that computers can execute code more efficiently, which can save time and money by sorting through “big data” faster than ever.[14]
  37. It is intended to provide only a brief overview of the extensive and broad topic of Parallel Computing, as a lead-in for the tutorials that follow it.[15]
  38. The tutorial begins with a discussion on parallel computing - what it is and how it's used, followed by a discussion on concepts and terminology associated with parallel computing.[15]
  39. It soon becomes obvious that there are limits to the scalability of parallelism.[15]
  40. Can be very easy and simple to use - provides for "incremental parallelism".[15]
  41. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem.[16]
  42. Multi-core processors have brought parallel computing to desktop computers.[16]
  43. Without instruction-level parallelism, a processor can only issue less than one instruction per clock cycle (IPC < 1).[16]
  44. Executing multiple instructions concurrently within one processor in this way is known as instruction-level parallelism.[16]
  45. In this introductory chapter some views regarding state-of-the-art and trends in parallelism are given, accompanied by a summary of individual chapters.[17]
  46. These days, many computational libraries have built-in parallelism that can be used behind the scenes.[18]
  47. Usually, this kind of “hidden parallelism” will not affect you and will improve your computational efficiency.[18] (See the second code example after this list.)
  48. Parallel computing is the execution of a computer program utilizing multiple computer processors (CPUs) concurrently instead of using one processor exclusively.[19]
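Two short sketches follow. The first illustrates the task-level parallelism described in items 15 and 27: a task is decomposed into subtasks, and each subtask is allocated to a worker process for execution. It is a minimal sketch using only Python's standard-library multiprocessing module; the prime-counting job and the chunk boundaries are hypothetical stand-ins, not taken from the cited sources.

  # Task parallelism: decompose one large job into subtasks and execute
  # them concurrently on multiple processors.
  from multiprocessing import Pool

  def count_primes(bounds):
      """Count primes in [lo, hi) -- a stand-in for any CPU-bound subtask."""
      lo, hi = bounds
      count = 0
      for n in range(max(lo, 2), hi):
          if all(n % d for d in range(2, int(n ** 0.5) + 1)):
              count += 1
      return count

  if __name__ == "__main__":
      # Split the range [0, 100000) into four independent subtasks ...
      chunks = [(0, 25000), (25000, 50000), (50000, 75000), (75000, 100000)]
      # ... and allocate each subtask to one of four worker processes.
      with Pool(processes=4) as pool:
          partial_counts = pool.map(count_primes, chunks)
      print(sum(partial_counts))  # total number of primes below 100000

Because the four subtasks are independent, Pool.map can run them on separate cores and the partial counts are simply summed at the end; on a single-core machine the same code still runs, just without the speedup.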
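The second sketch illustrates the "hidden parallelism" of items 46 and 47: a single high-level library call that may fan out over several CPU cores without any parallel code on the caller's side. Whether NumPy's matrix multiplication actually runs multithreaded is an assumption here; it depends on the BLAS backend (e.g. OpenBLAS or MKL) the installation was built against.

  # "Hidden parallelism": a single NumPy call delegates to a BLAS library,
  # which may use several threads internally -- no parallel code needed here.
  import numpy as np

  a = np.random.rand(2000, 2000)
  b = np.random.rand(2000, 2000)
  c = a @ b  # the BLAS backend may spread this across multiple CPU cores
  print(c.shape)  # (2000, 2000)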

Sources

Metadata

Wikidata

  • ID : Q232661 (https://www.wikidata.org/wiki/Q232661)

Spacy pattern list

  • [{'LOWER': 'parallel'}, {'LEMMA': 'computing'}]
  • [{'LEMMA': 'parallelism'}]
  • [{'LOWER': 'parallel'}, {'LEMMA': 'processing'}]
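
These entries are token patterns for spaCy's rule-based Matcher. Below is a minimal sketch of how they might be applied, assuming the en_core_web_sm model is installed; the match key PARALLEL_COMPUTING and the sample sentence are illustrative choices, not part of the wiki page.

  # Register the three token patterns above with spaCy's Matcher and
  # scan a document for mentions of the concept.
  import spacy
  from spacy.matcher import Matcher

  nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed
  matcher = Matcher(nlp.vocab)
  matcher.add(
      "PARALLEL_COMPUTING",  # illustrative key; any label works
      [
          [{'LOWER': 'parallel'}, {'LEMMA': 'computing'}],
          [{'LEMMA': 'parallelism'}],
          [{'LOWER': 'parallel'}, {'LEMMA': 'processing'}],
      ],
  )

  doc = nlp("Multi-core processors have brought parallel computing to desktop computers.")
  for match_id, start, end in matcher(doc):
      print(doc[start:end].text)  # e.g. "parallel computing"

Each pattern is a list of per-token constraints, so the three patterns together catch "parallel computing", "parallelism", and "parallel processing", with LEMMA also matching inflected forms.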