One Hot Encoding with Values Less Than 1?

This may be a naive question; I’m still relatively new to the ML world and definitely not strong with the math. I’m wondering whether I can change the influence of a one hot encoded input by adjusting its values to be less than one. I’m working in JS, and I’m trying to accomplish what I believe sampleWeight is supposed to do. Unfortunately, it doesn’t look like sampleWeight has been implemented; see this thread: sampleWeight not supported in tfjs? - #5 by sphere.

As an example, assuming ReLU hidden layers and a softmax output layer, say I’ve got an input, season, with the values winter, spring, summer, and fall, represented as one hot vectors:

[ 1, 0, 0, 0 ] - Winter
[ 0, 1, 0, 0 ] - Spring
[ 0, 0, 1, 0 ] - Summer
[ 0, 0, 0, 1 ] - Fall
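For reference, a minimal sketch of that encoding in plain JS (the `SEASONS` array and `oneHot` name are just for illustration, not from any library):

```javascript
// Encode a season as a one-hot vector: a 1 at the season's index, 0 elsewhere.
const SEASONS = ["winter", "spring", "summer", "fall"];

function oneHot(season) {
  return SEASONS.map((s) => (s === season ? 1 : 0));
}

console.log(oneHot("spring")); // [0, 1, 0, 0]
console.log(oneHot("fall"));   // [0, 0, 0, 1]
```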

If I wanted a particular input to have less influence on training the model, could I simply adjust the value?

[ 0, 0.5, 0, 0 ]

Is this going to have unintended consequences?
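For context on what the 0.5 does mechanically: in a dense layer each input entry is multiplied by a weight before the sum, so scaling the entry scales that feature’s contribution to the pre-activation linearly. A rough sketch with made-up weights:

```javascript
// Hypothetical learned weights for one unit of a dense layer.
const weights = [0.2, 0.8, -0.3, 0.1];

// Pre-activation of a dense unit: the dot product of input and weights.
const dot = (x, w) => x.reduce((sum, xi, i) => sum + xi * w[i], 0);

console.log(dot([0, 1, 0, 0], weights));   // 0.8
console.log(dot([0, 0.5, 0, 0], weights)); // 0.4 (half the contribution)
```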

It looks like classWeight does something similar as well, but I don’t want the weights to be specific to the class; instead, I want to weight a particular set of inputs.


edit: it looks like classWeight affects the output, not the input? Not sure it’s quite the same.

It’s fine to adjust those values; they are numeric after all, and your model likely doesn’t treat 1 and 0 specially, so a 0.5 will just contribute half as much as a 1. One-hot is just one way to represent a categorical feature. It’s not uncommon to see “multi hot” vectors (multiple 1s), or probability values (as in your softmax layer).

I’m not sure this is the best option for your case, though. If your goal is to limit a sample’s influence, then you want to control how much impact its loss has when back-propagating through the whole network. Sample weight and class weight do exactly that for you.
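To make that concrete, here is a rough sketch (not the tfjs API, just the idea) of what a sample weight does: it scales each sample’s loss term, and therefore its gradient, before averaging:

```javascript
// Sample-weighted mean squared error: each sample's squared error is
// scaled by its weight before averaging, so a weight of 0.5 halves that
// sample's contribution to the loss (and hence to the gradients).
function weightedMSE(preds, targets, sampleWeights) {
  let total = 0;
  for (let i = 0; i < preds.length; i++) {
    const err = preds[i] - targets[i];
    total += sampleWeights[i] * err * err;
  }
  return total / preds.length;
}

console.log(weightedMSE([1, 2], [0, 0], [1, 0.5])); // (1*1 + 0.5*4) / 2 = 1.5
console.log(weightedMSE([1, 2], [0, 0], [1, 1]));   // (1*1 + 1*4) / 2 = 2.5
```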

As a counter-example: if your model was learning that Spring is an important feature, then even when the value is 0.5 it may simply compensate by learning weights at 2x the scale. You control for this by down-weighting during back prop instead.
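A toy illustration of that compensation effect, using plain gradient descent on a single weight with an MSE loss (all numbers made up): feed the model 0.5 instead of 1 and it just learns a weight about twice as large.

```javascript
// Fit y ≈ w*x by gradient descent on the squared error (w*x - y)^2.
function fit(x, y, steps = 1000, lr = 0.1) {
  let w = 0;
  for (let i = 0; i < steps; i++) {
    const grad = 2 * (w * x - y) * x; // derivative of (w*x - y)^2 w.r.t. w
    w -= lr * grad;
  }
  return w;
}

console.log(fit(1.0, 1)); // converges to ~1
console.log(fit(0.5, 1)); // converges to ~2: the weight doubles to compensate
```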

Thanks for the response @macd! It doesn’t sound like adjusting the input data is going to do what I need. Looking at it another way, and to confirm my understanding: training with two samples with 0.5 values won’t have the same effect as training with a single sample at 1.0?

[[ 0, 0.5, 0, 0 ],
[ 0, 0.5, 0, 0 ]]

Is different than:

[[ 0, 1, 0, 0 ]]
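A quick way to check this with a toy linear model and MSE loss (made-up numbers): the per-sample gradient is 2·(w·x − y)·x, so two samples at x = 0.5 don’t generally produce the same summed gradient as one sample at x = 1, and they pull the weight toward a different optimum (w = 2 rather than w = 1).

```javascript
// Per-sample gradient of (w*x - y)^2 with respect to w.
const gradMSE = (w, x, y) => 2 * (w * x - y) * x;

// At w = 1, compare two half-strength samples with one full-strength sample
// (target y = 1 in both cases).
const w = 1;
const twoHalves = gradMSE(w, 0.5, 1) + gradMSE(w, 0.5, 1); // -1
const oneFull = gradMSE(w, 1, 1);                          //  0

console.log(twoHalves, oneFull); // different gradients, so different updates
```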

Maybe that’s a naive way of looking at it? Any idea when sampleWeight will be implemented for TFJS? I did peek at the Python training code to see if I could decipher how sampleWeight works but, well, my Python skills are nonexistent.