
from ray.rllib.utils.annotations import PublicAPI


@PublicAPI
class UnsupportedSpaceException(Exception):
    """Error for an unsupported action or observation space."""

    pass


@PublicAPI
class EnvError(Exception):
    """Error if we encounter an error during RL environment validation."""

    pass


@PublicAPI
class MultiAgentEnvError(Exception):
    """Error if we encounter an error during MultiAgentEnv stepping/validation."""

    pass


@PublicAPI
class NotSerializable(Exception):
    """Error if we encounter objects that can't be serialized by ray."""

    pass


ERR_MSG_NO_GPUS = """Found {} GPUs on your machine (GPU devices found: {})! If your
    machine does not have any GPUs, you should set the config keys
    `num_gpus_per_learner` and `num_gpus_per_env_runner` to 0. They may be set to
    1 by default for your particular RL algorithm."""

ERR_MSG_INVALID_ENV_DESCRIPTOR = """The env string you provided ('{}') is:
a) Not a supported or an installed environment.
b) Not a tune-registered environment creator.
c) Not a valid env class string.

Try one of the following:
a) For Atari support: `pip install gymnasium[atari]` and prefix the environment name with `ale_py:`, for example, `"ale_py:ALE/Pong-v5"`.
b) To register your custom env, do `from ray import tune; tune.register_env('[name]', lambda cfg: [return env obj from here using cfg])`.
   Then in your config, do `config.environment(env='[name]')`.
c) Make sure you provide a fully qualified classpath, e.g.:
   `ray.rllib.examples.envs.classes.repeat_after_me_env.RepeatAfterMeEnv`
"""
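
# A sketch tying options (b) and (c) from the message above together
# ("MyEnv" and the "my_env" name are hypothetical; assumes `ray[rllib]`
# is installed):
#
#     from ray import tune
#     from ray.rllib.algorithms.ppo import PPOConfig
#
#     tune.register_env("my_env", lambda cfg: MyEnv(cfg))
#     config = PPOConfig().environment(env="my_env")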
a  Your environment ({}) does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.
{}
Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

1) Run `pip install gymnasium` on your command line.
2) Change all your import statements in your code from
   `import gym` -> `import gymnasium as gym` OR
   `from gym.spaces import Discrete` -> `from gymnasium.spaces import Discrete`

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided `from gymnasium.wrappers import
     EnvCompatibility` wrapper class.
3.2) Alternatively to 3.1:
 - Change your `reset()` method to have the call signature 'def reset(self, *,
   seed=None, options=None)'
 - Return an additional info dict (empty dict should be fine) from your `reset()`
   method.
 - Return an additional `truncated` flag from your `step()` method (between `done` and
   `info`). This flag should indicate whether the episode was terminated prematurely
   due to some time constraint or other kind of horizon setting.

For your custom RLlib `MultiAgentEnv` classes:
4.1) Either wrap your old MultiAgentEnv via the provided
     `from ray.rllib.env.wrappers.multi_agent_env_compatibility import
     MultiAgentEnvCompatibility` wrapper class.
4.2) Alternatively to 4.1:
 - Change your `reset()` method to have the call signature
   'def reset(self, *, seed=None, options=None)'
 - Return an additional per-agent info dict (empty dict should be fine) from your
   `reset()` method.
 - Rename `dones` into `terminateds` and only set this to True if the episode is really
   done (as opposed to has been terminated prematurely due to some horizon/time-limit
   setting).
 - Return an additional `truncateds` per-agent dictionary flag from your `step()`
   method, including the `__all__` key (100% analogous to your `dones/terminateds`
   per-agent dict).
   Return this new `truncateds` dict between `dones/terminateds` and `infos`. This
   flag should indicate whether the episode (for some agent or all agents) was
   terminated prematurely due to some time constraint or other kind of horizon setting.
"""
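
# A minimal, self-contained sketch of the new-style call signatures described
# above (hypothetical example classes for illustration only; real envs should
# subclass `gymnasium.Env` or `ray.rllib.env.multi_agent_env.MultiAgentEnv`):


class _ExampleNewAPIEnv:
    """Single-agent: keyword-only `reset()` args and 5-tuple `step()` return."""

    def reset(self, *, seed=None, options=None):
        self._t = 0
        return 0, {}  # observation and (possibly empty) info dict

    def step(self, action):
        self._t += 1
        terminated = False  # the episode is really done
        truncated = self._t >= 5  # a time/horizon limit was hit
        return self._t, 0.0, terminated, truncated, {}


class _ExampleNewAPIMultiAgentEnv:
    """Multi-agent: per-agent dicts, each including the special `__all__` key."""

    def step(self, action_dict):
        obs = {"agent_0": 1}
        rewards = {"agent_0": 0.0}
        terminateds = {"agent_0": False, "__all__": False}
        truncateds = {"agent_0": False, "__all__": False}
        infos = {"agent_0": {}}
        # New return order: obs, rewards, terminateds, truncateds, infos.
        return obs, rewards, terminateds, truncateds, infos
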
a8  Could not save keras model under self[TfPolicy].model.base_model!
    This is either due to ..
    a) .. this Policy's ModelV2 not having any `base_model` (tf.keras.Model) property
    b) .. the ModelV2's `base_model` not being used by the Algorithm and thus its
       variables not being properly initialized.
"""

ERR_MSG_TORCH_POLICY_CANNOT_SAVE_MODEL = """Could not save torch model under self[TorchPolicy].model!
    This is most likely due to the fact that you are using an Algorithm that
    uses a Catalog-generated TorchModelV2 subclass, which torch.save() cannot pickle.
"""

HOWTO_CHANGE_CONFIG = """
To change the config for `tune.Tuner().fit()` in a script: Modify the python dict
  passed to `tune.Tuner(param_space=[...]).fit()`.
To change the config for an RLlib Algorithm instance: Modify the python dict
  passed to the Algorithm's constructor, e.g. `PPO(config=[...])`.
N)ray.rllib.utils.annotationsr   	Exceptionr   r   r   r   ERR_MSG_NO_GPUSERR_MSG_INVALID_ENV_DESCRIPTORERR_MSG_OLD_GYM_API)ERR_MSG_TF_POLICY_CANNOT_SAVE_KERAS_MODEL&ERR_MSG_TORCH_POLICY_CANNOT_SAVE_MODELHOWTO_CHANGE_CONFIGr   r   r   <module>r      s%   1 1 1 1 1 1 	 	 	 	 		 	 	 	 	 	 	 	 	y 	 	 	 	 	 	 	 	 	 	 	 	 	 	 	 	i 	 	 	6
" + \- )* &   r   