Keras ValueError: For a `build()` method with more than one argument, all arguments should have a `_shape` suffix and match an argument from `call()`

Hello,

I’m encountering this error when attempting to run the minimal working example from the current GitHub README.

For troubleshooting, I’ve tried bayesflow 2.0.7 and 2.0.6, the jax GPU, jax CPU, and tensorflow CPU backends, and downgrading keras to 3.9.0, all with no luck. I’d be happy to try any suggestions you may have.

Install/setup details:

  • Running on Linux (RHEL 9.2 on my university’s HPC cluster)
  • uv for install/package management

Install:

uv venv test-bf
source test-bf/bin/activate
uv pip install bayesflow jax
Using Python 3.11.0 environment at: test-bf
Resolved 32 packages in 54ms
Installed 32 packages in 29.25s
 + absl-py==2.3.1
 + bayesflow==2.0.7
 + contourpy==1.3.3
 + cycler==0.12.1
 + fonttools==4.60.1
 + h5py==3.15.1
 + jax==0.7.1
 + jaxlib==0.7.1
 + keras==3.12.0
 + kiwisolver==1.4.9
 + markdown-it-py==4.0.0
 + matplotlib==3.10.7
 + mdurl==0.1.2
 + ml-dtypes==0.5.3
 + namex==0.1.0
 + numpy==1.26.4
 + opt-einsum==3.4.0
 + optree==0.17.0
 + packaging==25.0
 + pandas==2.3.3
 + pillow==12.0.0
 + pygments==2.19.2
 + pyparsing==3.2.5
 + python-dateutil==2.9.0.post0
 + pytz==2025.2
 + rich==14.2.0
 + scipy==1.16.3
 + seaborn==0.13.2
 + six==1.17.0
 + tqdm==4.67.1
 + typing-extensions==4.15.0
 + tzdata==2025.2

Error:

import bayesflow as bf
INFO:2025-10-31 11:57:26,010:jax._src.xla_bridge:822: Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
INFO:jax._src.xla_bridge:Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
INFO:bayesflow:Using backend 'jax'
workflow = bf.BasicWorkflow(
     inference_network=bf.networks.FlowMatching(),
     summary_network=bf.networks.TimeSeriesTransformer(),
     inference_variables=["parameters"],
     summary_variables=["observables"],
     simulator=bf.simulators.SIR()
)
history = workflow.fit_online(epochs=50, batch_size=32, num_batches_per_epoch=500)
INFO:bayesflow:Fitting on dataset instance of OnlineDataset.
INFO:bayesflow:Building on a test batch.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/workflows/basic_workflow.py", line 789, in fit_online
    return self._fit(
           ^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/workflows/basic_workflow.py", line 966, in _fit
    self.history = self.approximator.fit(
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/approximators/continuous_approximator.py", line 315, in fit
    return super().fit(*args, **kwargs, adapter=self.adapter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/approximators/approximator.py", line 137, in fit
    self.build(mock_data_shapes)
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/keras/src/layers/layer.py", line 231, in build_wrapper
    original_build_method(*args, **kwargs)
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/approximators/continuous_approximator.py", line 79, in build
    self.summary_network.build(data_shapes["summary_variables"])
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/keras/src/layers/layer.py", line 231, in build_wrapper
    original_build_method(*args, **kwargs)
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/utils/decorators.py", line 95, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/networks/summary_network.py", line 18, in build
    z = self.call(x)
        ^^^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/bayesflow/networks/transformers/time_series_transformer.py", line 144, in call
    inp = layer(inp, inp, training=training, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/oscar/data/dbadre/mfreund/prep/test-bf/lib/python3.11/site-packages/keras/src/layers/layer.py", line 1970, in update_shapes_dict_for_target_fn
    raise ValueError(
ValueError: For a `build()` method with more than one argument, all arguments should have a `_shape` suffix and match an argument from `call()`. E.g. `build(self, foo_shape, bar_shape)`  For layer 'MultiHeadAttentionBlock', Received `build()` argument `self`, which does not end in `_shape`.

Potentially related to:

What Python version are you using? Is it also 3.10.x?
PS. I can see the uv output as 3.11, but am unable to reproduce. Does the dev version have the same problem?

Indeed, I’m using 3.11.0. No luck with the dev version of bayesflow, unfortunately; I’m receiving the same error as above. (I installed it in a new env via uv pip install -U jax git+https://github.com/bayesflow-org/bayesflow.git@dev).

We are trying to get to the bottom of this, but the issue is that we cannot reproduce the error on any of the keras 3.11.x or 3.12.x versions. The mysterious thing about the error is that transformer blocks already have the required signature:

    def call(self, seq_x: Tensor, seq_y: Tensor, training: bool = False, **kwargs) -> Tensor:
        ...
    @sanitize_input_shape
    def build(self, seq_x_shape, seq_y_shape):
        ...
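
For context, the rule the error message is enforcing is the Keras 3 convention that when `call()` takes several tensor arguments, a multi-argument `build()` must name its parameters after them with a `_shape` suffix. Here is a minimal standalone sketch of a toy layer (toy code, not BayesFlow) that satisfies that convention:

import keras
from keras import ops

class ToyCrossBlock(keras.layers.Layer):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.dense = keras.layers.Dense(units)

    # build() arguments mirror call()'s tensor arguments, plus a "_shape" suffix
    def build(self, seq_x_shape, seq_y_shape):
        self.dense.build(seq_x_shape)

    def call(self, seq_x, seq_y, training=False):
        return self.dense(seq_x + seq_y)

# Keras maps the input shapes to build() by matching these names at first call
block = ToyCrossBlock()
out = block(ops.zeros((2, 16, 4)), ops.zeros((2, 16, 4)))

So, in principle, the block shown above already satisfies exactly this check, which is why the error is puzzling.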

I will keep you posted while @LarsKue and I try to figure out what’s going on.

OK, thank you for the update! If I can be of any help please let me know.

I am not sure if any of this is helpful, but just in case:

Confirming that the `MultiHeadAttentionBlock.build` signature looks correct on my end:

import inspect
from bayesflow.networks.transformers.time_series_transformer import MultiHeadAttentionBlock
print(inspect.getsource(MultiHeadAttentionBlock.build))
@sanitize_input_shape
def build(self, seq_x_shape, seq_y_shape):
    self.call(keras.ops.zeros(seq_x_shape), keras.ops.zeros(seq_y_shape))
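
As an aside, `inspect.getsource` shows the code as written; since `build` is decorated, the runtime signature is presumably closer to what Keras actually inspects at build time. Continuing the snippet above, something like this should show it:

# If the decorator preserves metadata (functools.wraps), this should report
# (self, seq_x_shape, seq_y_shape); anything else would explain the error
print(inspect.signature(MultiHeadAttentionBlock.build))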

Contents of requirements file:

absl-py==2.3.1
asttokens==3.0.0
bayesflow==2.0.7
contourpy==1.3.3
cycler==0.12.1
decorator==5.2.1
executing==2.2.1
fonttools==4.60.1
h5py==3.15.1
ipython==9.6.0
ipython-pygments-lexers==1.1.1
jax==0.7.1
jaxlib==0.7.1
jedi==0.19.2
keras==3.12.0
kiwisolver==1.4.9
markdown-it-py==4.0.0
matplotlib==3.10.7
matplotlib-inline==0.2.1
mdurl==0.1.2
ml-dtypes==0.5.3
namex==0.1.0
numpy==1.26.4
opt-einsum==3.4.0
optree==0.17.0
packaging==25.0
pandas==2.3.3
parso==0.8.5
pexpect==4.9.0
pillow==12.0.0
prompt-toolkit==3.0.52
ptyprocess==0.7.0
pure-eval==0.2.3
pygments==2.19.2
pyparsing==3.2.5
python-dateutil==2.9.0.post0
pytz==2025.2
rich==14.2.0
scipy==1.16.3
seaborn==0.13.2
six==1.17.0
stack-data==0.6.3
tqdm==4.67.1
traitlets==5.14.3
typing-extensions==4.15.0
tzdata==2025.2
wcwidth==0.2.14

At least part of this error may have been due to something silly on my end: I was activating the venv prior to loading the relevant modules (module load python) on the HPC. So perhaps, for example, some paths weren’t set appropriately.

Using the correct order, the example now fails on SkipRecurrentNet with the same type of error.

For layer 'SkipRecurrentNet', Received `build()` argument `self`, which does not end in `_shape`.

It appears that SkipRecurrentNet’s build and call have incompatible signatures (if I am reading this correctly):

print(inspect.getsource(SkipRecurrentNet.build))
    @sanitize_input_shape
    def build(self, input_shape):
        super().build(input_shape)

print(inspect.getsource(SkipRecurrentNet.call))
    def call(self, time_series: Tensor, training: bool = False, **kwargs) -> Tensor:
        direct_summary = self.recurrent(time_series, training=training)
        skip_summary = self.skip_recurrent(self.skip_conv(time_series), training=training)
        return keras.ops.concatenate((direct_summary, skip_summary), axis=-1)
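
Although, re-reading the error message, the `_shape` naming rule seems to apply only to `build()` methods with more than one argument, so I would have expected a plain single-argument build like this minimal sketch (toy code, not BayesFlow) to be accepted as-is:

import keras

class MinimalSummary(keras.layers.Layer):
    # Standard one-argument build(): no _shape name matching should be needed
    def build(self, input_shape):
        self.kernel = self.add_weight(shape=(input_shape[-1], 4))

    def call(self, time_series, training=False):
        return keras.ops.matmul(time_series, self.kernel)

So perhaps the problem is in how the signature is being read rather than in the signature itself.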

Apologies for the hassle & confusion here, & I hope I’m not confounding things further.


OK, I think this may in fact be an upstream problem that requires all shape and input arguments to have the same nomenclature. Are you able to train the workflow with, say, the TimeSeriesTransformer instead?

Possibly, I’ll look into it. Thank you!

I was actually able to reproduce this error in a GitHub Actions run, which I hope may help (GH Actions Link in post). I think this error is not due to a bug in bayesflow or Keras, but is caused by a bug in Python’s inspect module in some Python versions, which was fixed in later versions. See https://bugs.python.org/issue24298.
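
If that is the cause, recreating the environment on a newer Python patch release might be worth a try (the exact version below is only an example):

uv venv --python 3.11.9 test-bf
source test-bf/bin/activate
uv pip install bayesflow jax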
