"초평면"의 두 판 사이의 차이
Notes
Wikidata
- ID : Q657586
Corpus
- This family will be stacked along the unique vector (up to sign) that is perpendicular to the original hyperplane.[1]
- In a vector space, a vector hyperplane is a subspace of codimension 1, only possibly shifted from the origin by a vector, in which case it is referred to as a flat.[2]
- An affine hyperplane together with the associated points at infinity forms a projective hyperplane.[2]
- In projective space, a hyperplane does not divide the space into two parts; rather, it takes two hyperplanes to separate points and divide up the space.[2]
- Hyperplane arrangements are well-investigated objects, concerning their combinatorial as well as their algorithmic complexity; see Edelsbrunner et al.[3]
- If the \(k\) points do not uniquely define a \((k\!-\!1)\)-dimensional hyperplane (i.e. they lie on a \((k\!-\!2)\)-dimensional hyperplane), a vector containing zeros is returned (see the second sketch after this list).[4]
- The Perceptron is guaranteed to find a hyperplane if one exists.[5]
- Support vectors are special because they are the training points that define the maximum margin of the hyperplane to the data set and they therefore determine the shape of the hyperplane.[5]
- If you were to move one of them and retrain the SVM, the resulting hyperplane would change.[5]
- If the data is low dimensional it is often the case that there is no separating hyperplane between the two classes.[5]
- From these properties, we conclude that the sphere s is a point lying on the hyperplane, but outside the null cone.[6]
- In the next section, we will formulate a constrained optimization problem to determine the least squares fit of a hyperplane to uncertain data.[6]
- This adds up to the full preimage of a hyperplane, and the result is zero.[6]
- This immediately provides an indication of the sensitivity of the objective functions to variation of the parameter values in each hyperplane.[6]
- For example, if you take 3-dimensional space, then a hyperplane is a geometric entity of dimension one less, i.e. a plane.[7]
- So here we have a 2-dimensional space in \(X_1\) and \(X_2\), and, as we have discussed before, an equation in two dimensions describes a line, which is the hyperplane of that space.[7]
- Let’s consider the same example that we took in the hyperplane case.[7]
- So we can say that this point is on the hyperplane, i.e. on the line.[7]
- A linear expression, for example \(3x+3y-5z-7\), stands for the hyperplane with the equation \(3x+3y-5z=7\).[8]
- At the end of Part 2 we computed the distance between a point and a hyperplane.[9]
- You can also see the optimal hyperplane in Figure 2.[9]
- If I have a hyperplane, I can compute its margin with respect to some data point (see the first sketch after this list).[9]
- We see that the point is on the hyperplane, and so the constraint is respected.[9]
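Several of the excerpts above ([5], [7], [9]) revolve around one computation: a hyperplane \(w \cdot x + b = 0\) splits space into two sides, and the signed distance from a point to it gives both the side test and the margin. Below is a minimal NumPy sketch of that computation; the function name signed_distance, the variables w and b, and the sample data are illustrative assumptions, not taken from any cited source.

<syntaxhighlight lang="python">
import numpy as np

def signed_distance(w, b, x):
    """Signed distance from point x to the hyperplane {x : w.x + b = 0}.

    The sign tells which side of the hyperplane x lies on; zero (up to
    floating-point tolerance) means x lies on the hyperplane itself.
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    return (w @ x + b) / np.linalg.norm(w)

# The hyperplane 3x + 3y - 5z = 7 from the Sage excerpt, written as w.x + b = 0.
w = np.array([3.0, 3.0, -5.0])
b = -7.0

print(signed_distance(w, b, [4.0, 0.0, 1.0]))  # 0.0: this point lies on the hyperplane
print(signed_distance(w, b, [0.0, 0.0, 0.0]))  # about -1.068: the origin is on the negative side

# The margin of the hyperplane with respect to a data set is the smallest
# absolute distance over all of its points.
points = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, -2.0], [1.0, 0.0, 2.0]])
margin = np.min(np.abs(points @ w + b) / np.linalg.norm(w))
print(margin)  # about 0.152
</syntaxhighlight>

Moving a point that attains this minimum changes the margin, which is the sense in which the support vectors "determine the shape of the hyperplane" in the SVM excerpts.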
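The excerpt from [4] specifies a contract for a hyperplane-fitting routine: \(k\) points in general position determine a unique \((k\!-\!1)\)-dimensional hyperplane, and degenerate inputs yield a vector of zeros. One way to realize that contract, sketched here with NumPy, uses the fact that a hyperplane \(a \cdot x = c\) through all the points corresponds to a null vector of the matrix whose rows are \((x_i, 1)\). The function name and the SVD-based rank test are my assumptions, not the cited implementation.

<syntaxhighlight lang="python">
import numpy as np

def hyperplane_through(points, tol=1e-10):
    """Coefficients (a, c) of a hyperplane a.x = c through k points in R^k.

    Mirrors the behaviour quoted in [4]: if the points do not uniquely
    determine a (k-1)-dimensional hyperplane (they lie on a lower-dimensional
    flat), a vector of zeros is returned. Otherwise the result is determined
    up to overall scale and sign.
    """
    P = np.asarray(points, dtype=float)
    k = P.shape[0]
    A = np.hstack([P, np.ones((k, 1))])  # rows (x_i, 1); we seek v with A @ v = 0
    _, s, vt = np.linalg.svd(A)
    if np.sum(s > tol * s[0]) < k:       # numerical rank below k: degenerate input
        return np.zeros(k + 1)
    v = vt[-1]                           # unit vector spanning the 1-dim null space
    return np.concatenate([v[:k], [-v[k]]])  # (a, c) with a.x = c

# Three points on the plane 3x + 3y - 5z = 7 recover a multiple of (3, 3, -5, 7).
print(hyperplane_through([[4, 0, 1], [1, 4/3, 0], [-1, 10/3, 0]]))

# Three collinear points lie on a 1-dimensional flat in R^3: zeros are returned.
print(hyperplane_through([[0, 0, 0], [1, 1, 1], [2, 2, 2]]))
</syntaxhighlight>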
Sources
1. Hyperplane
2. Hyperplane
3. Hyperplane - an overview
4. hyperplane function
5. Lecture 9: SVM
6. Example sentences
7. Hyperplane, Subspace and Halfspace
8. Hyperplanes — Sage 9.2 Reference Manual: Combinatorial and Discrete Geometry
9. SVM – Understanding the math – the optimal hyperplane – SVM Tutorial
Metadata
Wikidata
- ID : Q657586