In the midst of writing my semi-neural-net compressor I have introduced a scaling function based on the premise of a neural-net activation function: stretch and squash. Because I always work within a normalized domain of 0.0 to 1.0, I was looking for a sigmoid-like function that would bring my probabilities closer to a best guess, or at least a better prediction.
Below is a graph showing the Hermite basis function of a cubic Hermite spline, which acts as a sigmoid on the domain 0.0 to 1.0. The function is very simple and the results are phenomenal. For comparison I show a linear curve (no scaling, of course) and a hyperbolic tangent that has been skewed to fit within the 0.0 to 1.0 domain. I have also tested other functions, such as the log sigmoid and the double exponential sigmoid, with decent results. This simple function produces an S-shaped curve that seems to give amazing results. Outside the domain I use, though, the cubic diverges instead of saturating, so the function is definitely a fake sigmoid.
The formula is very basic: p(x) = -2x^3 + 3x^2, where ^ denotes an exponent.
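To make the behaviour concrete, here is a minimal Python sketch of the function alongside a tanh rescaled into the 0.0 to 1.0 domain for comparison. The steepness constant k in the tanh version is my own illustrative pick, not a value from the post, and this is just a sketch rather than the compressor's actual code.

```python
import numpy as np

def smoothstep(p):
    """Hermite basis 'fake sigmoid': p(x) = -2x^3 + 3x^2 on [0, 1].

    Written in factored form to avoid recomputing powers.
    """
    return p * p * (3.0 - 2.0 * p)

def tanh_sigmoid(p, k=3.0):
    """tanh remapped from its native (-1, 1) range into (0, 1).

    The steepness k is a hypothetical choice for illustration only.
    """
    return 0.5 * (np.tanh(k * (2.0 * p - 1.0)) + 1.0)

if __name__ == "__main__":
    # Compare the two curves across the normalized probability domain.
    for p in np.linspace(0.0, 1.0, 11):
        print(f"p={p:.1f}  smoothstep={smoothstep(p):.4f}  "
              f"tanh={tanh_sigmoid(p):.4f}")
```

Note the fixed points at 0.0, 0.5, and 1.0: probabilities below 0.5 get pushed toward 0.0 and probabilities above 0.5 toward 1.0 (the slope is 1.5 at the midpoint and 0 at the endpoints), which is exactly the stretch-and-squash behaviour described above.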