tf.cast not changing the dtype: tensorflowjs "Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor"

I'm trying to load a model I developed in TensorFlow (Python) with TensorFlow.js and make a prediction for a test input, as follows:

tf_model = await tf.loadGraphModel('http://localhost:8080/tf_models/models_js/model/model.json')
let test_output = await tf_model.predict(tf.tensor2d([0.0, -1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [1, 9], 'float32'))
console.log("[Test tf model]:", test_output.arraySync())

I get this error in the JS console at the tf_model.predict call:

Error: Argument 'x' passed to 'conv2d' must be float32 tensor, but got int32 tensor

even though the input to the Conv2D layer is of type float32 in the model definition:


import tensorflow as tf

inputs = tf.keras.layers.Input((9,))

# One-hot encoding: shift values {-1, 0, 1} to {0, 1, 2}, then cast back to float32
x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32)

x = tf.reshape(x, (-1, 3, 3, 3))
x = tf.keras.layers.Conv2D(
        filters=3**5, kernel_size=(3, 3), kernel_regularizer=kernel_regularizer  # regularizer defined earlier
    )(x)
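For what it's worth, the same cast chain run eagerly, outside the Keras functional graph, does produce a float32 tensor. A minimal sketch (the sample input below is just the test vector from the JS snippet):

```python
import tensorflow as tf

# Eager check of the same cast chain used in the model above:
inputs = tf.constant([[0.0, -1.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32)
print(x.dtype)  # float32
print(x.shape)  # (1, 9, 3)
```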

Does anybody know why this happens?

EDIT: It seems tf.cast does not change the dtype. If I run

print(tf.shape(inputs))
x = tf.cast(tf.one_hot(tf.cast(inputs + 1, tf.int32), 3), tf.float32)
print(tf.shape(x))

I keep getting dtype=tf.int32:

KerasTensor(type_spec=TensorSpec(shape=(2,), dtype=tf.int32, name=None), inferred_value=[None, 9], name='tf.compat.v1.shape_12/Shape:0', description="created by layer 'tf.compat.v1.shape_12'")
KerasTensor(type_spec=TensorSpec(shape=(3,), dtype=tf.int32, name=None), inferred_value=[None, 9, 3], name='tf.compat.v1.shape_13/Shape:0', description="created by layer 'tf.compat.v1.shape_13'")

???
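One caveat about this debugging step: tf.shape itself returns a fresh int32 tensor that holds the shape values, so the dtype printed above is the dtype of that shape tensor, not the dtype of x. Checking x.dtype directly is the more reliable test. A minimal sketch:

```python
import tensorflow as tf

# tf.shape returns a new int32 tensor holding the shape values,
# so its dtype is int32 regardless of the input tensor's dtype:
t = tf.zeros((3, 3), dtype=tf.float32)
print(tf.shape(t).dtype)  # int32
print(t.dtype)            # float32
```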


