constant_folding optimization could cause trouble with stabilization optimizations
Hi,
The constant_folding optimization, as it currently runs before the stabilization optimizations, can prevent those optimizations from being applied when needed. As a result, a graph with a constant can return a different result than the same graph with that value passed as an input. We need to make constant folding happen after the stabilize phase. James says it should still happen before specialization, as that phase frequently looks for constants.
This case was not found in real code but would reproduce this error:
import theano
import theano.tensor as T

x = T.as_tensor_variable(800)
y = T.log(1 + T.exp(x))
f = theano.function([], y)
print f.maker.env.toposort()  # returns []
print f()                     # returns inf

x2 = T.scalar()
y2 = T.log(1 + T.exp(x2))
f2 = theano.function([x2], y2)
print f2.maker.env.toposort()  # returns [softplus(...)]
print f2(800)                  # returns 800
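For reference, the numerical problem the softplus stabilization guards against can be sketched in plain NumPy. This is only an illustration of the float64 overflow, not Theano code; naive_softplus and stable_softplus are hypothetical names, and the threshold of 30 is an arbitrary cutoff chosen for the sketch:

```python
import numpy as np

def naive_softplus(x):
    # Direct evaluation of log(1 + exp(x)): exp(800) overflows
    # float64, so the result is inf instead of ~800.
    with np.errstate(over='ignore'):
        return np.log(1.0 + np.exp(x))

def stable_softplus(x):
    # The stabilized form the softplus rewrite aims for:
    # for large x, log(1 + exp(x)) ~= x; otherwise log1p(exp(x))
    # is accurate and cannot overflow.
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > 30.0, x, np.log1p(np.exp(np.minimum(x, 30.0))))
```

With constant folding running first, the graph containing the constant 800 is evaluated the naive way before the stabilization rewrite ever sees it, which is why f() above returns inf while f2(800) returns 800.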
See also ticket http://trac-hg.assembla.com/theano/ticket/503, which is related to constant folding.