"음성 인식"의 두 판 사이의 차이

수학노트
둘러보기로 가기 검색하러 가기
(→‎노트: 새 문단)
 
 
(같은 사용자의 중간 판 하나는 보이지 않습니다)
81번째 줄: 81번째 줄:
 
===소스===
 
===소스===
 
  <references />
 
  <references />
 +
 +
==메타데이터==
 +
===위키데이터===
 +
* ID :  [https://www.wikidata.org/wiki/Q189436 Q189436]
 +
===Spacy 패턴 목록===
 +
* [{'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
 +
* [{'LOWER': 'automatic'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
 +
* [{'LEMMA': 'ASR'}]
 +
* [{'LOWER': 'computer'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
 +
* [{'LEMMA': 'STT'}]
 +
* [{'LOWER': 'speech'}, {'LOWER': 'to'}, {'LEMMA': 'text'}]

Latest revision as of 01:17, 17 February 2021

Notes

Wikidata

Corpus

  1. Note : On some browsers, like Chrome, using Speech Recognition on a web page involves a server-based recognition engine.[1]
  2. IBM has had a prominent role within speech recognition since its inception, releasing “Shoebox” in 1962.[2]
  3. This speech recognition software had a 42,000-word vocabulary, supported English and Spanish, and included a spelling dictionary of 100,000 words.[2]
  4. Meanwhile, speech recognition continues to advance.[2]
  5. Speech recognition technology is evaluated on its accuracy rate, i.e. word error rate (WER), and speed.[2]
  6. Dictation uses Google Speech Recognition to transcribe your spoken words into text.[3]
  7. Speech recognition, or speech-to-text, is the ability for a machine or program to identify words spoken aloud and convert them into readable text.[4]
  8. Rudimentary speech recognition software has a limited vocabulary of words and phrases, and it may only identify these if they are spoken very clearly.[4]
  9. Speech recognition incorporates different fields of research in computer science, linguistics and computer engineering.[4]
  10. It is important to note the terms speech recognition and voice recognition are sometimes used interchangeably.[4]
  11. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT).[5]
  12. Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolated vocabulary into the system.[5]
  13. Raj Reddy was the first person to take on continuous speech recognition as a graduate student at Stanford University in the late 1960s.[5]
  14. 1971 – DARPA funded five years for Speech Understanding Research, speech recognition research seeking a minimum vocabulary size of 1,000 words.[5]
  15. Speech adaptation Customize speech recognition to transcribe domain-specific terms and rare words by providing hints and boost your transcription accuracy of specific words or phrases.[6]
  16. On-Prem Have full control over your infrastructure and protected speech data while leveraging Google’s speech recognition technology on-premises , right in your own private data centers.[6]
  17. The history of speech recognition technology has been a long and winding one.[7]
  18. Speech recognition technology works in essentially the same way.[7]
  19. What’s kept speech recognition from becoming the dominant form of computing as of yet is its unreliability.[7]
  20. After all, speech recognition accuracy is what determines whether these voice assistants become a can’t-live-without feature.[7]
  21. If you don't see a dialog box that says "Welcome to Speech Recognition Voice Training," then in the search box on the taskbar, type Control Panel, and select Control Panel in the list of results.[8]
  22. ASR systems that are extremely reliable, flexible, and easy to use are available for use as full-function keyboard and for mouse emulation.[9]
  23. Microsoft Vista includes ASR as part of the built-in package of accessories.[9]
  24. Case study (Evaluation and Selection of Speech Recognition): Marilyn Abraham is a 44-year-old woman who has been diagnosed as having reflex sympathetic dystrophy (RSD) of both wrists.[9]
  25. Two basic types of ASR systems exist.[9]
  26. This specification defines a JavaScript API to enable web developers to incorporate speech recognition and synthesis into their web pages.[10]
  27. It enables developers to use scripting to generate text-to-speech output and to use speech recognition as an input for forms, continuous dictation and control.[10]
  28. The API itself is agnostic of the underlying speech recognition and synthesis implementation and can support both server-based and client-based/embedded recognition and synthesis.[10]
  29. The DOM Level 2 Event Model is used for speech recognition events.[10]
  30. This class provides access to the speech recognition service.[11]
  31. The implementation of this API is likely to stream audio to remote servers to perform speech recognition.[11]
  33. Cancels the speech recognition.[11]
  33. Cancels the speech recognition.[11]
  34. Speech recognition, the ability of devices to respond to spoken commands.[12]
  35. Speech recognition enables hands-free control of various devices and equipment (a particular boon to many disabled persons), provides input to automatic translation, and creates print-ready dictation.[12]
  36. Among the earliest applications for speech recognition were automated telephone systems and medical dictation software.[12]
  37. It is the digital signal that a speech recognition program analyzes in order to recognize separate phonemes, the basic building blocks of speech.[12]
  38. Voice assistive technologies, which enable users to employ voice commands to interact with their devices, rely on accurate speech recognition to ensure responsiveness to a specific user.[13]
  39. But in many real-world use cases, the input to such technologies often consists of overlapping speech, which poses great challenges to many speech recognition algorithms.[13]
  40. We are excited about adopting the same technology to improve speech recognition for more languages.[13]
  41. Speech recognition, also referred to as speech-to-text or voice recognition, is technology that recognizes speech, allowing voice to serve as the "main interface between the human and the computer".[14]
  42. If you haven't used speech recognition with your students lately, it may be time to take another look.[14]
  43. Other applications include speech recognition for foreign language learning, voice activated products for the blind, and many familiar mainstream technologies.[14]
  44. Writing production: For students with learning disabilities, speech recognition technology can encourage writing that is more thoughtful and deliberate.[14]
  45. Omilia has solved this problem by training our recognition models with real world call center audio to optimize the language and acoustic models of our ASR engine.[15]
  46. With this personalized approach to speech recognition Omilia reached unprecedented accuracy in speech to text transcription.[15]
  47. Amazon Transcribe makes it easy for developers to add speech to text capabilities to their applications.[16]
  48. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately.[16]
  49. Use Voice Recognition to fill out forms and dictate email with speech to text.[17]
  50. Dictate emails with speech to text![17]
  51. Speech Recognition Anywhere now includes text to speech, custom voice commands and scripting.[17]
  52. If speech recognition is not working on a specific website then you can try (1) refresh the web page or (2) restart your computer.[17]
  53. With Sestek Speech Recognition, machines and applications can understand user commands in spoken language.[18]
  54. Speech recognition software is a computer program that types words as you speak them into a microphone.[19]
  55. Yes – speech recognition programs come pre-loaded with many commands that allow the user to open and close programs, change some settings, move the cursor and click on links.[19]
  56. To use speech recognition software you need to have clear speech.[19]
  57. Computers come with built in speech recognition software.[19]
  58. The new JavaScript Web Speech API makes it easy to add speech recognition to your web pages.[20]
  59. This API allows fine control and flexibility over the speech recognition capabilities in Chrome version 25 and later.[20]
  60. The default value for continuous is false, meaning that when the user stops talking, speech recognition will end.[20]
  61. Voice and speech recognition is witnessing high demand in the healthcare sector owing to a rise in usage of voice command to record the patient’s details through voice.[21]
  62. The voice and speech recognition is also used in the R&D center and medical labs to check the authenticity of the employee and also they make sure no clinical data is breached.[21]
  63. Based on function, the global speech and voice recognition market is segmented into speech recognition and voice recognition.[21]
  64. The voice and speech recognition market is limited in these regions due to the poor IT and telecom infrastructure.[21]
  65. Note: To start an ASR session, tap the Push-to-talk tab on the taskbar, then wait for the audible cue before you say a command.[22]
  66. You can use the search module settings in the /etc/asr-car.cfg file to define keys (synonyms) for the supported speech commands.[22]
  67. Several factors affect the latencies of voice-command recognition. End of Speech (EOS) detection: too much ambient noise may prevent the ASR service from detecting EOS.[22]
  68. You can change this setting in the /etc/asr-car.cfg file.[22]
  69. Speech recognition software allows users to control their computers with their voice rather than, or in addition to, a mouse or keyboard.[23]
  70. Windows Speech Recognition for Windows 10 is a feature that gives access to most computer features with the use of voice.[23]
  71. Using Windows Speech Recognition and Cortana is a low-cost solution.[23]
  72. Together with OpenVINO™-based neural-network speech recognition, these libraries provide an end-to-end pipeline converting speech to text.[24]
  73. Note that the OpenVINO™ package also includes an automatic speech recognition sample demonstrating acoustic model inference based on Kaldi* neural networks.[24]
  74. However, the Speech Library and speech recognition demos do not require the GNA accelerator.[24]
  75. Then you can use new models in the live speech recognition demo.[24]
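Item 5 above notes that speech recognition systems are evaluated by word error rate (WER). As a rough illustration (not taken from any of the cited sources), WER can be computed as the word-level Levenshtein edit distance between a reference transcript and a hypothesis, divided by the number of reference words; the function name and example sentences below are illustrative only.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word ("the") out of six reference words.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Production toolkits typically report WER the same way but add normalization (casing, punctuation) before scoring.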

Sources

Metadata

Wikidata

  • ID : Q189436 (https://www.wikidata.org/wiki/Q189436)

Spacy pattern list

  • [{'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
  • [{'LOWER': 'automatic'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
  • [{'LEMMA': 'ASR'}]
  • [{'LOWER': 'computer'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}]
  • [{'LEMMA': 'STT'}]
  • [{'LOWER': 'speech'}, {'LOWER': 'to'}, {'LEMMA': 'text'}]
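Each pattern above is a list of per-token constraints as used by spaCy's rule-based Matcher: LOWER matches the lowercased token text and LEMMA matches the token's lemma. The sketch below is a pure-Python stand-in that mimics this behavior without requiring spaCy; the tiny lemma table and whitespace tokenization are simplifying assumptions (real use would load a spaCy pipeline and register these patterns with spacy.matcher.Matcher).

```python
# The token patterns from the list above, as Python literals.
PATTERNS = [
    [{'LOWER': 'speech'}, {'LEMMA': 'recognition'}],
    [{'LOWER': 'automatic'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}],
    [{'LEMMA': 'ASR'}],
    [{'LOWER': 'computer'}, {'LOWER': 'speech'}, {'LEMMA': 'recognition'}],
    [{'LEMMA': 'STT'}],
    [{'LOWER': 'speech'}, {'LOWER': 'to'}, {'LEMMA': 'text'}],
]

# Toy lemma lookup standing in for spaCy's lemmatizer (assumption).
LEMMAS = {'recognitions': 'recognition', 'texts': 'text'}

def token_matches(token: str, constraint: dict) -> bool:
    """Check one token against one constraint (LOWER or LEMMA)."""
    if 'LOWER' in constraint:
        return token.lower() == constraint['LOWER']
    if 'LEMMA' in constraint:
        lemma = LEMMAS.get(token.lower(), token.lower())
        return lemma == constraint['LEMMA'].lower()
    return False

def matches(text: str) -> bool:
    """True if any pattern matches a contiguous span of whitespace tokens."""
    tokens = text.split()
    for pattern in PATTERNS:
        n = len(pattern)
        for start in range(len(tokens) - n + 1):
            if all(token_matches(tokens[start + k], pattern[k])
                   for k in range(n)):
                return True
    return False

print(matches("Automatic speech recognition is everywhere"))  # True
print(matches("image recognition"))                           # False
```

In spaCy itself the same patterns would be added with `matcher.add("SPEECH_RECOGNITION", PATTERNS)` and run over a `Doc` produced by a pipeline with a lemmatizer, so that LEMMA constraints see real lemmas rather than this lookup table.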