public class LogConditionalObjectiveFunction<L,F> extends AbstractStochasticCachingDiffUpdateFunction
Nested classes inherited from class AbstractStochasticCachingDiffFunction: AbstractStochasticCachingDiffFunction.SamplingMethod

| Modifier and Type | Field and Description |
|---|---|
| protected int[][] | data |
| protected java.lang.Iterable<Datum<L,F>> | dataIterable |
| protected float[] | dataweights |
| protected DoubleAD[] | derivativeAD |
| protected double[] | derivativeNumerator |
| protected Index<F> | featureIndex |
| protected Index<L> | labelIndex |
| protected int[] | labels |
| protected int | numClasses |
| protected int | numFeatures |
| protected LogPrior | prior |
| protected double[] | priorDerivative |
| protected DoubleAD[] | probs |
| protected DoubleAD[] | sums |
| protected boolean | useIterable |
| protected boolean | useSummedConditionalLikelihood |
| protected double[][] | values |
| protected DoubleAD[] | xAD |
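The exact semantics of these fields are not documented on this page, but the constructor signatures suggest a parallel-array sparse encoding: each row of `data` lists the active feature indices of one datum, `values` (when present) holds the corresponding real feature values, and `labels` holds one gold class per datum. A minimal illustration of that assumed layout:

```java
public class DataLayoutSketch {
    public static void main(String[] args) {
        // Each row of `data` lists the indices of the features active in
        // that datum (a sparse encoding; layout inferred, not confirmed).
        int[][] data = {
            {0, 2},      // datum 0: features 0 and 2 fire
            {1, 3},      // datum 1: features 1 and 3 fire
            {0, 1, 3}    // datum 2: features 0, 1, and 3 fire
        };
        // For real-valued features, `values` runs parallel to `data`.
        double[][] values = {
            {1.0, 0.5},
            {2.0, 1.0},
            {1.0, 1.0, 3.0}
        };
        // One gold class label per datum.
        int[] labels = {0, 1, 0};

        // Sanity check: the value rows must stay parallel to the data rows.
        for (int i = 0; i < data.length; i++) {
            if (data[i].length != values[i].length)
                throw new IllegalStateException("row " + i + " not parallel");
        }
        System.out.println("data size = " + data.length + ", labels = " + labels.length);
    }
}
```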
Fields inherited from class AbstractStochasticCachingDiffUpdateFunction: skipValCalc

Fields inherited from class AbstractStochasticCachingDiffFunction: allIndices, curElement, finiteDifferenceStepSize, gradPerturbed, hasNewVals, HdotV, lastBatch, lastBatchSize, lastElement, lastVBatch, lastXBatch, method, randGenerator, recalculatePrevBatch, returnPreviousValues, sampleMethod, scaleUp, thisBatch, xPerturbed

Fields inherited from class AbstractCachingDiffFunction: derivative, generator, value

| Constructor and Description |
|---|
| LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset) |
| LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, float[] dataWeights, LogPrior prior) |
| LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, LogPrior prior) |
| LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, LogPrior prior, boolean useSumCondObjFun) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, double[][] values, int[] labels, int intPrior, double sigma, double epsilon) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels, boolean useSumCondObjFun) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels, float[] dataweights) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels, float[] dataweights, LogPrior prior) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels, int intPrior, double sigma, double epsilon) |
| LogConditionalObjectiveFunction(int numFeatures, int numClasses, int[][] data, int[] labels, LogPrior prior) |
| LogConditionalObjectiveFunction(java.lang.Iterable<Datum<L,F>> dataIterable, LogPrior logPrior, Index<F> featureIndex, Index<L> labelIndex) |
| Modifier and Type | Method and Description |
|---|---|
| protected void | calculate(double[] x) Calculate the conditional likelihood. |
| void | calculateStochastic(double[] x, double[] v, int[] batch) calculateStochastic needs to calculate a stochastic approximation to the derivative and value of a function for a given batch of the data. |
| protected void | calculateStochasticAlgorithmicDifferentiation(double[] x, double[] v, int[] batch) |
| void | calculateStochasticFiniteDifference(double[] x, double[] v, double h, int[] batch) |
| void | calculateStochasticGradient(double[] x, int[] batch) Performs the stochastic gradient calculation based on samples indexed by batch, without applying regularization. |
| void | calculateStochasticGradientLocal(double[] x, int[] batch) |
| double | calculateStochasticUpdate(double[] x, double xscale, int[] batch, double gain) Performs a stochastic update of the weights x (scaled by xscale) based on samples indexed by batch. |
| int | dataDimension() Data dimension must return the size of the data used by the function. |
| int | domainDimension() Returns the number of dimensions in the function's domain. |
| protected int | indexOf(int f, int c) |
| protected void | rvfcalculate(double[] x) Calculate the conditional likelihood for datasets with real-valued features. |
| void | setPrior(LogPrior prior) |
| void | setUseSumCondObjFun(boolean value) |
| double[][] | to2D(double[] x) |
| double | valueAt(double[] x, double xscale, int[] batch) Computes the value of the function for the given x (scaled by xscale), only over samples indexed by batch. |
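The objective behind calculate and valueAt is the conditional likelihood of the labels given the data; in log space, the "standard (product)" version is a sum of per-datum log softmax probabilities. A toy sketch of that computation (illustrative only, not the class's actual implementation; the per-class scores would come from dotting the weights with each datum's features):

```java
import java.util.Locale;

public class ConditionalLikelihoodSketch {
    // Log of the "standard (product)" conditional likelihood:
    // sum_i log P(labels[i] | datum i), with P a softmax over per-class scores.
    static double logConditionalLikelihood(double[][] scores, int[] labels) {
        double total = 0.0;
        for (int i = 0; i < scores.length; i++) {
            double max = Double.NEGATIVE_INFINITY;
            for (double s : scores[i]) max = Math.max(max, s);
            double sumExp = 0.0;                  // stabilized log-sum-exp
            for (double s : scores[i]) sumExp += Math.exp(s - max);
            total += scores[i][labels[i]] - (max + Math.log(sumExp));
        }
        return total;
    }

    public static void main(String[] args) {
        // Two data, two classes; scores[i][c] is the score of class c for datum i.
        double[][] scores = {{2.0, 0.0}, {0.0, 1.0}};
        int[] labels = {0, 1};
        System.out.println(String.format(Locale.ROOT, "%.4f",
                logConditionalLikelihood(scores, labels)));
    }
}
```

Maximizing this quantity (plus the log prior) is what the optimizers driving this function do.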
Methods inherited from class AbstractStochasticCachingDiffUpdateFunction: calculateStochasticGradient, calculateStochasticUpdate, getSample, valueAt

Methods inherited from class AbstractStochasticCachingDiffFunction: clearCache, decrementBatch, derivativeAt, derivativeAt, getBatch, HdotVAt, HdotVAt, HdotVAt, incrementBatch, incrementRandom, initial, lastDerivative, lastValue, scaleUp, valueAt, valueAt

Methods inherited from class AbstractCachingDiffFunction: copy, derivativeAt, ensure, getDerivative, gradientCheck, gradientCheck, randomInitial, valueAt

protected LogPrior prior
protected int numFeatures
protected int numClasses
protected int[][] data
protected double[][] values
protected int[] labels
protected float[] dataweights
protected double[] derivativeNumerator
protected DoubleAD[] xAD
protected double[] priorDerivative
protected DoubleAD[] derivativeAD
protected DoubleAD[] sums
protected DoubleAD[] probs
protected boolean useIterable
protected boolean useSummedConditionalLikelihood
public LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset)
public LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, LogPrior prior)
public LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, float[] dataWeights, LogPrior prior)
public LogConditionalObjectiveFunction(GeneralDataset<L,F> dataset, LogPrior prior, boolean useSumCondObjFun)
public LogConditionalObjectiveFunction(java.lang.Iterable<Datum<L,F>> dataIterable, LogPrior logPrior, Index<F> featureIndex, Index<L> labelIndex)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels,
boolean useSumCondObjFun)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels,
LogPrior prior)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels,
float[] dataweights)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels,
float[] dataweights,
LogPrior prior)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
int[] labels,
int intPrior,
double sigma,
double epsilon)
public LogConditionalObjectiveFunction(int numFeatures,
int numClasses,
int[][] data,
double[][] values,
int[] labels,
int intPrior,
double sigma,
double epsilon)
public void setPrior(LogPrior prior)
public int domainDimension()
Specified by: domainDimension in interface Function

public int dataDimension()
Description copied from class: AbstractStochasticCachingDiffFunction. Data dimension must return the size of the data used by the function.
Specified by: dataDimension in class AbstractStochasticCachingDiffFunction

protected int indexOf(int f, int c)
public double[][] to2D(double[] x)
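indexOf(f, c) and to2D suggest that the flat weight vector x packs one weight per (feature, class) pair. The exact packing is an implementation detail not stated on this page; the convention assumed in the sketch below is f * numClasses + c, which to2D would invert:

```java
public class IndexSketch {
    static final int NUM_FEATURES = 3;
    static final int NUM_CLASSES = 2;

    // Hypothetical flattening: position of the weight for feature f, class c.
    static int indexOf(int f, int c) {
        return f * NUM_CLASSES + c;
    }

    // Inverse of the flattening: reshape the flat weight vector into a
    // [numFeatures][numClasses] matrix, as to2D does.
    static double[][] to2D(double[] x) {
        double[][] w = new double[NUM_FEATURES][NUM_CLASSES];
        for (int f = 0; f < NUM_FEATURES; f++)
            for (int c = 0; c < NUM_CLASSES; c++)
                w[f][c] = x[indexOf(f, c)];
        return w;
    }

    public static void main(String[] args) {
        double[] x = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}; // domainDimension = 3 * 2
        double[][] w = to2D(x);
        System.out.println(w[2][1]); // weight for feature 2, class 1
    }
}
```

Under this packing, domainDimension() is numFeatures * numClasses, the length of x.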
protected void calculate(double[] x)
Calculate the conditional likelihood. If useSummedConditionalLikelihood is false (the default), this calculates the standard (product) conditional likelihood; otherwise it calculates the summed conditional likelihood.
What's the difference? See Klein and Manning's 2002 EMNLP paper.
Specified by: calculate in class AbstractCachingDiffFunction
Parameters: x - the point at which to calculate the function

public void calculateStochastic(double[] x, double[] v, int[] batch)
Description copied from class: AbstractStochasticCachingDiffFunction. calculateStochastic needs to calculate a stochastic approximation to the derivative (stored in derivative), the approximation to the value (stored in value), and the approximation to the Hessian vector product H.v (stored in the array HdotV). Note that the Hessian vector product is used primarily with the Stochastic Meta Descent optimization routine SMDMinimizer.
Important: the stochastic approximation must be such that the sum of all stochastic calculations over each of the batches in the data equals the full calculation. I.e., for a data set of size 100, the sum of the gradients for batches 1-10, 11-20, 21-30, ..., 91-100 must be the same as the gradient for the full calculation (at the very least in expectation). Be sure to take the priors into account.
Specified by: calculateStochastic in class AbstractStochasticCachingDiffFunction
Parameters:
x - value to evaluate at
v - the vector for the Hessian vector product H.v
batch - an array containing the indices of the data to use in the calculation; this array is generated internally by the abstract class and only needs to be handled, not generated, by the implementation

public void calculateStochasticFiniteDifference(double[] x,
double[] v,
double h,
int[] batch)
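The batch-sum requirement described above (per-batch gradients, priors included, must add up to the full gradient) can be checked on a toy one-dimensional objective. Here the prior's gradient is apportioned to each batch by the fraction of the data it covers, which is one reasonable way to "take the priors into account" (not necessarily the library's):

```java
public class BatchSumSketch {
    // Toy objective: sum_i (x - d_i)^2 + x^2, where x^2 plays the prior.
    static double fullGradient(double x, double[] data) {
        double g = 2 * x;                        // prior gradient
        for (double d : data) g += 2 * (x - d);  // data gradient
        return g;
    }

    // Stochastic gradient over one batch, with the prior's gradient
    // scaled by the batch's share of the data set.
    static double batchGradient(double x, double[] data, int[] batch) {
        double g = 2 * x * batch.length / (double) data.length;
        for (int i : batch) g += 2 * (x - data[i]);
        return g;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 2.0, 3.0, 4.0};
        double x = 0.5;
        // The two half-batches together must reproduce the full gradient.
        double sum = batchGradient(x, data, new int[]{0, 1})
                   + batchGradient(x, data, new int[]{2, 3});
        System.out.println(sum == fullGradient(x, data));
    }
}
```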
public void calculateStochasticGradientLocal(double[] x,
int[] batch)
public double valueAt(double[] x,
double xscale,
int[] batch)
Description copied from class: AbstractStochasticCachingDiffUpdateFunction. Computes the value of the function for the given x (scaled by xscale), only over samples indexed by batch.
Specified by: valueAt in class AbstractStochasticCachingDiffUpdateFunction
Parameters:
x - unscaled weights
xscale - how much to scale x by when performing calculations
batch - indices of which samples to compute the function over

public double calculateStochasticUpdate(double[] x,
double xscale,
int[] batch,
double gain)
Description copied from class: AbstractStochasticCachingDiffUpdateFunction. Performs a stochastic update of the weights x (scaled by xscale) based on samples indexed by batch.
Specified by: calculateStochasticUpdate in class AbstractStochasticCachingDiffUpdateFunction
Parameters:
x - unscaled weights
xscale - how much to scale x by when performing calculations
batch - indices of which samples to compute the function over
gain - how much to scale adjustments to x

public void calculateStochasticGradient(double[] x,
int[] batch)
Description copied from class: AbstractStochasticCachingDiffUpdateFunction. Performs the stochastic gradient calculation based on samples indexed by batch, without applying regularization.
Specified by: calculateStochasticGradient in class AbstractStochasticCachingDiffUpdateFunction
Parameters:
x - unscaled weights
batch - indices of which samples to compute the function over

protected void calculateStochasticAlgorithmicDifferentiation(double[] x,
double[] v,
int[] batch)
protected void rvfcalculate(double[] x)
public void setUseSumCondObjFun(boolean value)
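Putting the stochastic-update API together: an optimizer such as SMDMinimizer sweeps batches and calls calculateStochasticUpdate, which both returns the batch value and applies a gain-scaled gradient step to x. The interface and driver loop below are a schematic stand-in, not the real API; a toy quadratic plays the role of the objective:

```java
public class StochasticLoopSketch {
    // Stand-in for the calculateStochasticUpdate contract: evaluate the
    // function over one batch and update x in place, scaled by gain.
    interface StochasticUpdateFunction {
        double calculateStochasticUpdate(double[] x, double xscale,
                                         int[] batch, double gain);
        int dataDimension();
    }

    // Toy objective: f(x) = sum_i (xscale * x[0] - d_i)^2 over the batch.
    static class Quadratic implements StochasticUpdateFunction {
        final double[] data = {1.0, 2.0, 3.0, 4.0};
        public int dataDimension() { return data.length; }
        public double calculateStochasticUpdate(double[] x, double xscale,
                                                int[] batch, double gain) {
            double value = 0.0;
            for (int i : batch) {
                double diff = xscale * x[0] - data[i];
                value += diff * diff;
                x[0] -= gain * 2 * diff * xscale;  // gradient step, scaled by gain
            }
            return value;
        }
    }

    public static void main(String[] args) {
        StochasticUpdateFunction f = new Quadratic();
        double[] x = {0.0};
        // Sweep batches of size 2 for 50 passes; x drifts toward the minimizer.
        for (int pass = 0; pass < 50; pass++)
            for (int start = 0; start < f.dataDimension(); start += 2)
                f.calculateStochasticUpdate(x, 1.0, new int[]{start, start + 1}, 0.05);
        // Fixed gain leaves a small bias, so x settles near (not at) the
        // data mean of 2.5.
        System.out.printf(java.util.Locale.ROOT, "%.2f%n", x[0]);
    }
}
```

A real driver would also shuffle batch order and decay the gain; this sketch keeps both fixed for determinism.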