Difference between two revisions of "Finite state transducer"

수학노트
== Metadata ==

=== Wikidata ===

* ID: [https://www.wikidata.org/wiki/Q2166395 Q2166395]

Revision as of 04:14, 26 December 2020

Notes

Wikidata

Corpus

  1. We present a Weighted Finite State Transducer Translation Template Model for statistical machine translation.[1]
  2. The approach we describe allows us to implement each constituent distribution of the model as a weighted finite state transducer or acceptor.[1]
  3. A finite-state transducer is essentially a finite-state automaton that works on two (or more) tapes.[2]
  4. A finite-state transducer (FST) is a finite-state machine with two memory tapes, following the terminology for Turing machines: an input tape and an output tape.[3]
  5. Relations that can be implemented as finite-state transducers are called rational relations.[3]
  6. Finite-state transducers are often used for phonological and morphological analysis in natural language processing research and applications.[3]
  7. Finite State Transducers can be weighted, where each transition is labelled with a weight in addition to the input and output labels.[3]
  8. In this article, we apply a hierarchical pipeline concept that composes Weighted Finite-State Transducers (WFST) together.[4]
  9. Weighted finite-state transducers (WFSTs) are good at modeling HMMs and solving state-machine problems.[4]
  10. A finite-state transducer (FST) has arcs labeled with input and output symbols.[4]
  11. We can represent a rewrite rule as a regular relation and thus we can build a corresponding finite-state transducer.[5]
  12. So far we’ve learned that each rewrite rule is a binary regular string relation and that these relations can be represented by finite-state transducers.[5]
  13. We’ve seen how the rewrite rules can be represented as regular string relations, which in turn have an equivalent formalism, namely finite-state transducers.[5]
  14. We introduce a framework for automatic differentiation with weighted finite-state transducers (WFSTs) allowing them to be used dynamically at training time.[6]
  15. One of the key ideas in this technology is to separate processing into several stages, in "cascaded finite-state transducers".[7]
  16. In a finite-state transducer, an output entity is constructed when final states are reached, e.g., a representation of the information in a phrase.[7]
  17. In a cascaded finite-state transducer, there are different finite-state transducers at different stages.[7]
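To make items 3, 4, and 7 above concrete, here is a minimal sketch (not from any of the cited sources) of a weighted finite-state transducer: each arc consumes an input symbol, emits an output symbol, and carries a weight that is summed along the path (the tropical semiring, as used in WFST toolkits). All names here (`WFST`, `Arc`, `transduce`) are illustrative assumptions, not an established API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arc:
    inp: str       # input label (the symbol read from the input tape)
    out: str       # output label (the symbol written to the output tape)
    weight: float  # transition weight, accumulated by addition along a path
    dest: int      # destination state

class WFST:
    """A toy weighted finite-state transducer, assumed deterministic on input labels."""

    def __init__(self, start, finals):
        self.start = start
        self.finals = set(finals)
        self.arcs = {}  # state -> list of outgoing Arc

    def add_arc(self, src, inp, out, weight, dest):
        self.arcs.setdefault(src, []).append(Arc(inp, out, weight, dest))

    def transduce(self, symbols):
        """Run the machine on the input tape; return (output string, total weight)."""
        state, out, total = self.start, [], 0.0
        for sym in symbols:
            # Deterministic assumption: exactly one arc matches each input symbol.
            arc = next(a for a in self.arcs.get(state, []) if a.inp == sym)
            out.append(arc.out)
            total += arc.weight
            state = arc.dest
        if state not in self.finals:
            raise ValueError("input not accepted")
        return "".join(out), total

# Toy relation: uppercase each letter, with per-symbol costs.
fst = WFST(start=0, finals={0})
fst.add_arc(0, "a", "A", 1.0, 0)
fst.add_arc(0, "b", "B", 2.0, 0)
print(fst.transduce("abba"))  # ('ABBA', 6.0)
```

Dropping the `weight` field and the `total` accumulator recovers the plain (unweighted) two-tape transducer of item 3; adding competing arcs per input symbol plus a shortest-path search would give the nondeterministic case that toolkits such as OpenFst handle.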

Sources
