Reference

Types

LeastSquaresSVM.LSSVC - Type
LSSVC <: SVM
LSSVC(; kernel=:rbf, γ=1.0, σ=1.0, degree=0)

The type to hold a Least Squares Support Vector Classifier.

Fields

  • kernel::Symbol: The kind of kernel to use for the non-linear mapping of the data. Can be one of the following: :rbf, :linear, or :poly.
  • γ::Float64: The γ hyperparameter intrinsic to the Least Squares formulation of Support Vector Machines; it weighs the error term in the objective and thus acts as a regularization parameter.
  • σ::Float64: The hyperparameter for the RBF kernel.
  • degree::Int: The degree of the polynomial kernel. Only used if kernel is :poly.
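
A quick construction sketch (hedged: it assumes LSSVC is exported by LeastSquaresSVM, and the hyperparameter values shown are purely illustrative):

using LeastSquaresSVM

svc = LSSVC()                                        # RBF kernel with the defaults γ=1.0, σ=1.0
svc_poly = LSSVC(; kernel=:poly, γ=2.0, degree=3)    # degree is only meaningful for the :poly kernel
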
LeastSquaresSVM.LSSVR - Type
LSSVR <: SVM
LSSVR(; kernel=:rbf, γ=1.0, σ=1.0, degree=0)

The type to hold a Least Squares Support Vector Regressor.

Fields

  • kernel::Symbol: The kind of kernel to use for the non-linear mapping of the data. Can be one of the following: :rbf, :linear, or :poly.
  • γ::Float64: The γ hyperparameter intrinsic to the Least Squares formulation of Support Vector Machines; it weighs the error term in the objective and thus acts as a regularization parameter.
  • σ::Float64: The hyperparameter for the RBF kernel.
  • degree::Int: The degree of the polynomial kernel. Only used if kernel is :poly.
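
A construction sketch along the same lines (illustrative values only):

using LeastSquaresSVM

svr = LSSVR(; kernel=:rbf, γ=10.0, σ=2.0)   # RBF regressor with a wider kernel
svr_lin = LSSVR(; kernel=:linear, γ=0.5)    # σ and degree play no role for the linear kernel
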
LeastSquaresSVM.SVM - Type
SVM

A super type for both classifiers and regressors that are implemented as Support Vector Machines.

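
A minimal dispatch sketch (hedged: the describe helper is hypothetical, and SVM is qualified with the module name in case it is not exported):

using LeastSquaresSVM

# Generic code can accept either model through the shared supertype.
describe(svm::LeastSquaresSVM.SVM) = "kernel = $(svm.kernel), γ = $(svm.γ)"

describe(LSSVC())   # works for the classifier
describe(LSSVR())   # and for the regressor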

Methods

LeastSquaresSVM.svmpredict - Method
svmpredict(svm::LSSVC, fits, xnew::AbstractMatrix) -> AbstractArray

Uses the information obtained from svmtrain, such as the bias and weights, to construct a decision function and predict the labels of new instances. For the classification problem only.

Arguments

  • svm::LSSVC: The Support Vector Machine that contains the hyperparameters, as well as the kernel to be used.
  • fits: It can be any container data structure, but it must have four elements: x, the data matrix; y, the labels vector; α, the weights; and b, the bias.
  • xnew::AbstractMatrix: The data matrix that contains the new instances to be predicted.

Returns

  • Array: The predicted labels, one for each instance in xnew.
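
An end-to-end classification sketch (hedged: the toy data, the one-instance-per-column layout, and passing the svmtrain output directly as fits are illustrative assumptions, not part of the documented signature):

using LeastSquaresSVM

X = randn(2, 20)                          # toy standardized data, one instance per column (assumed layout)
y = ifelse.(X[1, :] .> 0, 1.0, -1.0)      # two classes coded as -1 and 1

model = LSSVC(; kernel=:rbf, γ=1.0, σ=1.0)
fits = svmtrain(model, X, y)              # x, y, α, and b packed together
Xnew = randn(2, 5)
labels = svmpredict(model, fits, Xnew)    # one predicted label per instance in Xnew
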
LeastSquaresSVM.svmpredict - Method
svmpredict(svm::LSSVR, fits, xnew::AbstractMatrix) -> AbstractArray

Uses the information obtained from svmtrain, such as the bias and weights, to construct a decision function and predict the function values at new instances. For the regression problem only.

Arguments

  • svm::LSSVR: The Support Vector Machine that contains the hyperparameters, as well as the kernel to be used.
  • fits: It can be any container data structure, but it must have four elements: x, the data matrix; y, the vector of training targets; α, the weights; and b, the bias.
  • xnew::AbstractMatrix: The data matrix that contains the new instances to be predicted.

Returns

  • Array: The predicted function values, one for each instance in xnew.
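
A regression sketch in the same spirit (the sine-curve data, the column layout, and reusing the svmtrain output as fits are illustrative assumptions):

using LeastSquaresSVM

t = collect(range(-1.0, 1.0; length=60))
X = reshape(t, 1, :)                      # one instance per column (assumed layout)
y = sin.(2π .* t)                         # continuous targets

model = LSSVR(; kernel=:rbf, γ=10.0, σ=0.5)
fits = svmtrain(model, X, y)
Xnew = reshape([-0.5, 0.0, 0.5], 1, :)
preds = svmpredict(model, fits, Xnew)     # predicted function values at the new points
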
LeastSquaresSVM.svmtrain - Method
svmtrain(svm::LSSVC, x::AbstractMatrix, y::AbstractVector) -> Tuple

Solves a Least Squares Support Vector classification problem using the Conjugate Gradient method. In particular, it uses the Lanczos process because the matrices involved are symmetric.

Arguments

  • svm::LSSVC: The Support Vector Machine that contains the hyperparameters, as well as the kernel to be used.
  • x::AbstractMatrix: The data matrix with the features. It is expected that this array is already standardized, i.e. the mean for each feature is zero and its standard deviation is one.
  • y::AbstractVector: A vector that contains the classes. It is expected that there are only two classes, -1 and 1.

Returns

  • Tuple: A tuple containing x, y, and the following two elements:
      • b: Contains the bias for the decision function.
      • α: Contains the weights for the decision function.
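
A training-only sketch that emphasizes the standardization and label-coding requirements (the synthetic data and the one-feature-per-row layout are assumptions):

using LeastSquaresSVM, Statistics

Xraw = 5.0 .* randn(3, 40) .+ 2.0
X = (Xraw .- mean(Xraw; dims=2)) ./ std(Xraw; dims=2)   # zero mean, unit standard deviation per feature
y = ifelse.(X[1, :] .> 0, 1.0, -1.0)                    # classes coded as -1 and 1

model = LSSVC(; γ=0.5, σ=1.5)
fits = svmtrain(model, X, y)   # keep the whole tuple; svmpredict consumes it as-is
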
LeastSquaresSVM.svmtrain - Method
svmtrain(svm::LSSVR, x::AbstractMatrix, y::AbstractVector) -> Tuple

Solves a Least Squares Support Vector regression problem using the Conjugate Gradient method. In particular, it uses the Lanczos process because the matrices involved are symmetric.

Arguments

  • svm::LSSVR: The Support Vector Machine that contains the hyperparameters, as well as the kernel to be used.
  • x::AbstractMatrix: The data matrix with the features. It is expected that this array is already standardized, i.e. the mean for each feature is zero and its standard deviation is one.
  • y::AbstractVector: A vector that contains the continuous value of the function estimation. The elements can be any subtype of Real.

Returns

  • Tuple: A tuple containing x and the following two elements:
      • b: Contains the bias for the decision function.
      • α: Contains the weights for the decision function.
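
A matching regression training sketch (the target function and the data layout are illustrative assumptions):

using LeastSquaresSVM, Statistics

t = collect(range(-2.0, 2.0; length=50))
X = reshape((t .- mean(t)) ./ std(t), 1, :)   # standardized inputs, one instance per column (assumed)
y = exp.(-t .^ 2)                             # targets may be any subtype of Real

model = LSSVR(; kernel=:rbf, γ=5.0, σ=1.0)
fits = svmtrain(model, X, y)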