Monday, February 25, 2019

Impact of scaling and shifting random variables


To make training the network easier, we standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards (convert the network's outputs back to the original units) when we use it for predictions.
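
As a rough sketch of what this looks like in code (the DataFrame `data`, the column list `continuous_fields`, and the `scalings` dictionary are hypothetical names, not from the original post), assuming pandas:

import pandas as pd

# Hypothetical example data; in practice this would be the training set.
data = pd.DataFrame({
    'temp': [9.8, 14.2, 7.5, 21.0, 16.3],
    'humidity': [81.0, 62.5, 70.2, 55.0, 68.4],
})
continuous_fields = ['temp', 'humidity']

# Save each column's mean and standard deviation so we can go backwards later.
scalings = {}
for field in continuous_fields:
    mean, std = data[field].mean(), data[field].std()
    scalings[field] = (mean, std)
    # Shift to zero mean and scale to unit standard deviation.
    data[field] = (data[field] - mean) / std

# Going backwards: undo the standardization to recover the original units.
mean, std = scalings['temp']
recovered_temp = data['temp'] * std + mean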

SHIFTING
If a random variable is constructed by adding a constant c to another random variable X, i.e. Y = X + c:
  • The mean is shifted by that constant: E[Y] = E[X] + c
  • The standard deviation does not change: σ_Y = σ_X
SCALING
If a random variable is constructed by multiplying another random variable X by a constant k, i.e. Y = kX:
  • Both the mean and the standard deviation are affected: E[Y] = k · E[X] and σ_Y = |k| · σ_X (a small numerical check follows this list)
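
A small numerical check of both rules, sketched with NumPy on an arbitrary sample (the variable names and constants are illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)   # mean ~ 5, std ~ 2

# Shifting: adding a constant moves the mean but leaves the std unchanged.
shifted = x + 3.0
print(shifted.mean(), shifted.std())   # roughly 8 and 2

# Scaling: multiplying by a constant changes both the mean and the std.
scaled = 2.0 * x
print(scaled.mean(), scaled.std())     # roughly 10 and 4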

3 comments:

  1. Scaling and shifting are fundamental operations performed on random variables that significantly impact their behavior and the information they convey. Here's a breakdown of their individual effects:

      Scaling a Random Variable:

      Multiplying by a constant (k): when you multiply a random variable X by a constant k, it affects both the mean (average) and the standard deviation:
        • Mean: the new mean becomes k * μ, where μ is the original mean of X.
          • If k is positive (e.g., multiplying by 2), the mean is scaled proportionally in the same direction.
          • If k is negative (e.g., multiplying by -2), the mean is scaled in the opposite direction and flips sign.
        • Standard deviation: σ is also scaled by the absolute value of k: the new standard deviation is |k| * σ.

      The standard deviation increases proportionally with the absolute value of k.
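
For the negative case specifically, a quick illustrative NumPy check (sample parameters chosen arbitrarily) shows the mean flipping sign while the standard deviation stays positive:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)   # mean ~ 5, std ~ 2

y = -2.0 * x
print(y.mean(), y.std())   # mean ~ -10 (sign flips with k), std ~ 4 (= |-2| * 2)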
