Add a way to use higher precision in an arithmetic operation
Mentioned by James in theano-dev (thread: Inconsistent behavior in integer divisions):
'tensor.div(a,b,dtype='float64') might also be a good thing to have if you want to specify it. We could also make an optimization that replaces cast(div(a,b)) with a div with a built-in cast.'
I think such ideas may extend to other arithmetic operations as well, when we want the result to be of higher precision than the inputs.
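The proposed optimization can be illustrated with a minimal pure-Python sketch (hypothetical class and function names, not Theano's actual API): a toy graph rewrite that replaces cast(div(a, b)) with a div node carrying a built-in output dtype.

```python
# Toy symbolic nodes standing in for graph ops (hypothetical, not Theano's API).
class Div:
    def __init__(self, a, b, dtype=None):
        self.a, self.b, self.dtype = a, b, dtype

class Cast:
    def __init__(self, x, dtype):
        self.x, self.dtype = x, dtype

def fuse_cast_div(node):
    """Rewrite Cast(Div(a, b)) -> Div(a, b, dtype=...), else return node unchanged."""
    if isinstance(node, Cast) and isinstance(node.x, Div):
        return Div(node.x.a, node.x.b, dtype=node.dtype)
    return node

# cast(div(a, b)) collapses into a single div with a built-in cast.
graph = Cast(Div("a", "b"), "float64")
fused = fuse_cast_div(graph)
print(type(fused).__name__, fused.dtype)  # Div float64
```

The same pattern would apply to other arithmetic ops whose result should have higher precision than the inputs; a real implementation would also have to decide whether the cast happens on the inputs or on the result, which is exactly where integer division behaves inconsistently.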
By Olivier Delalleau on 2011-05-11 08:47
Note: this is also related to ticket 485