find and replace for ML-Agents Toolkit (Unity-Technologies#3799)
* find and replace for ML-Agents Toolkit

* apply

* install ruby

* apt-get update

* Update config.yml
Chris Elion authored Apr 22, 2020
1 parent b4efae8 commit e88ce26
Showing 12 changed files with 42 additions and 28 deletions.
3 changes: 3 additions & 0 deletions .circleci/config.yml
@@ -107,6 +107,9 @@ jobs:
      - run:
          name: Install Dependencies
          command: |
+           # Need ruby for search-and-replace
+           sudo apt-get update
+           sudo apt-get install ruby-full
            python3 -m venv venv
            . venv/bin/activate
            pip install --upgrade pip
7 changes: 7 additions & 0 deletions .pre-commit-config.yaml
@@ -72,6 +72,13 @@ repos:
)$
args: [--score=n]

+  - repo: https://github.com/mattlqx/pre-commit-search-and-replace
+    rev: v1.0.3
+    hooks:
+      - id: search-and-replace
+        types: [markdown]
+        exclude: ".*localized.*"

# "Local" hooks, see https://pre-commit.com/#repository-local-hooks
- repo: local
hooks:
4 changes: 4 additions & 0 deletions .pre-commit-search-and-replace.yaml
@@ -0,0 +1,4 @@
+- description: Replace "ML agents toolkit", "ML-Agents toolkit" etc
+  search: /ML[ -]Agents toolkit/
+  replacement: ML-Agents Toolkit
+  insensitive: true
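
For anyone reproducing this setup, the hook can be exercised locally with pre-commit's standard CLI (hook id taken from the config above; the hook itself is Ruby-based, hence the `ruby-full` install in CI):

```
# Run only the search-and-replace hook against every file in the repo
pre-commit run search-and-replace --all-files
```

With `insensitive: true`, the pattern `/ML[ -]Agents toolkit/` matches variants such as "ML agents toolkit", "ml-agents toolkit" and "ML-Agents toolkit", rewriting each to "ML-Agents Toolkit".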
6 changes: 3 additions & 3 deletions com.unity.ml-agents/CONTRIBUTING.md
@@ -1,8 +1,8 @@
# Contribution Guidelines

-Thank you for your interest in contributing to the ML-Agents toolkit! We are
+Thank you for your interest in contributing to the ML-Agents Toolkit! We are
incredibly excited to see how members of our community will use and extend the
-ML-Agents toolkit. To facilitate your contributions, we've outlined a brief set
+ML-Agents Toolkit. To facilitate your contributions, we've outlined a brief set
of guidelines to ensure that your extensions can be easily integrated.

## Communication
@@ -11,7 +11,7 @@ First, please read through our [code of conduct](https://github.com/Unity-Techno
expect all our contributors to follow it.

Second, before starting on a project that you intend to contribute to the
-ML-Agents toolkit (whether environments or modifications to the codebase), we
+ML-Agents Toolkit (whether environments or modifications to the codebase), we
**strongly** recommend posting on our
[Issues page](https://github.com/Unity-Technologies/ml-agents/issues)
and briefly outlining the changes you plan to make. This will enable us to
2 changes: 1 addition & 1 deletion docs/API-Reference.md
@@ -10,7 +10,7 @@ and run the following command within the `docs/` directory:
```
doxygen dox-ml-agents.conf
```

-`dox-ml-agents.conf` is a Doxygen configuration file for the ML-Agents toolkit
+`dox-ml-agents.conf` is a Doxygen configuration file for the ML-Agents Toolkit
that includes the classes that have been properly formatted. The generated HTML
files will be placed in the `html/` subdirectory. Open `index.html` within that
subdirectory to navigate to the API reference home. Note that `html/` is already
6 changes: 3 additions & 3 deletions docs/Learning-Environment-Design.md
@@ -9,7 +9,7 @@ state, then the agent receives no reward or a negative reward (punishment). As
the agent learns during training, it optimizes its decision making so that it
receives the maximum reward over time.

-The ML-Agents toolkit uses a reinforcement learning technique called
+The ML-Agents Toolkit uses a reinforcement learning technique called
[Proximal Policy Optimization (PPO)](https://blog.openai.com/openai-baselines-ppo/).
PPO uses a neural network to approximate the ideal function that maps an agent's
observations to the best action an agent can take in a given state. The
@@ -59,7 +59,7 @@ information.

## Organizing the Unity Scene

-To train and use the ML-Agents toolkit in a Unity scene, the scene can contain as many Agent subclasses as you need.
+To train and use the ML-Agents Toolkit in a Unity scene, the scene can contain as many Agent subclasses as you need.
Agent instances should be attached to the GameObject representing that Agent.

### Academy
@@ -125,7 +125,7 @@ about programming your own Agents.

## Environments

-An _environment_ in the ML-Agents toolkit can be any scene built in Unity. The
+An _environment_ in the ML-Agents Toolkit can be any scene built in Unity. The
Unity scene provides the environment in which agents observe, act, and learn.
How you set up the Unity scene to serve as a learning environment really depends
on your goal. You may be trying to solve a specific reinforcement learning
26 changes: 13 additions & 13 deletions docs/ML-Agents-Overview.md
@@ -10,14 +10,14 @@ game developers and hobbyists to easily train intelligent agents for 2D, 3D and
VR/AR games. These trained agents can be used for multiple purposes, including
controlling NPC behavior (in a variety of settings such as multi-agent and
adversarial), automated testing of game builds and evaluating different game
-design decisions pre-release. The ML-Agents toolkit is mutually beneficial for
+design decisions pre-release. The ML-Agents Toolkit is mutually beneficial for
both game developers and AI researchers as it provides a central platform where
advances in AI can be evaluated on Unity’s rich environments and then made
accessible to the wider research and game developer communities.

Depending on your background (i.e. researcher, game developer, hobbyist), you
may have very different questions on your mind at the moment. To make your
-transition to the ML-Agents toolkit easier, we provide several background pages
+transition to the ML-Agents Toolkit easier, we provide several background pages
that include overviews and helpful resources on the [Unity
Engine](Background-Unity.md), [machine learning](Background-Machine-Learning.md)
and [TensorFlow](Background-TensorFlow.md). We **strongly** recommend browsing
@@ -26,7 +26,7 @@ machine learning concepts or have not previously heard of TensorFlow.

The remainder of this page contains a deep dive into ML-Agents, its key
components, different training modes and scenarios. By the end of it, you should
-have a good sense of _what_ the ML-Agents toolkit allows you to do. The
+have a good sense of _what_ the ML-Agents Toolkit allows you to do. The
subsequent documentation pages provide examples of _how_ to use ML-Agents.

## Running Example: Training NPC Behaviors
@@ -104,14 +104,14 @@ the process of learning a policy through running simulations is called the
**training phase**, while playing the game with an NPC that is using its learned
policy is called the **inference phase**.

-The ML-Agents toolkit provides all the necessary tools for using Unity as the
+The ML-Agents Toolkit provides all the necessary tools for using Unity as the
simulation engine for learning the policies of different objects in a Unity
-environment. In the next few sections, we discuss how the ML-Agents toolkit
+environment. In the next few sections, we discuss how the ML-Agents Toolkit
achieves this and what features it provides.

## Key Components

-The ML-Agents toolkit is a Unity plugin that contains three high-level
+The ML-Agents Toolkit is a Unity plugin that contains three high-level
components:

- **Learning Environment** - which contains the Unity scene and all the game
@@ -157,9 +157,9 @@ medics (medics and drivers have different actions).
border="10" />
</p>

-_Example block diagram of ML-Agents toolkit for our sample game._
+_Example block diagram of ML-Agents Toolkit for our sample game._

-We have yet to discuss how the ML-Agents toolkit trains behaviors, and what role
+We have yet to discuss how the ML-Agents Toolkit trains behaviors, and what role
the Python API and External Communicator play. Before we dive into those
details, let's summarize the earlier components. Each character is attached to
an Agent, and each Agent has a Behavior. The Behavior can be thought of as a function
@@ -189,7 +189,7 @@ inference can proceed.

### Built-in Training and Inference

-As mentioned previously, the ML-Agents toolkit ships with several
+As mentioned previously, the ML-Agents Toolkit ships with several
implementations of state-of-the-art algorithms for training intelligent agents.
More specifically, during training, all the medics in the
scene send their observations to the Python API through the External
@@ -217,7 +217,7 @@ tutorial covers this training mode with the **3D Balance Ball** sample environme

In the previous mode, the Agents were used for training to generate
a TensorFlow model that the Agents can later use. However,
-any user of the ML-Agents toolkit can leverage their own algorithms for
+any user of the ML-Agents Toolkit can leverage their own algorithms for
training. In this case, the behaviors of all the Agents in the scene
will be controlled within Python.
You can even turn your environment into a [gym.](../gym-unity/README.md)
@@ -260,7 +260,7 @@ update the random policy to a more meaningful one that is successively improved
as the environment gradually increases in complexity. In our example, we can
imagine first training the medic when each team only contains one player, and
then iteratively increasing the number of players (i.e. the environment
-complexity). The ML-Agents toolkit supports setting custom environment
+complexity). The ML-Agents Toolkit supports setting custom environment
parameters within the Academy. This allows elements of the environment related
to difficulty or complexity to be dynamically adjusted based on training
progress.
@@ -330,7 +330,7 @@ inspiration:

## Additional Features

-Beyond the flexible training scenarios available, the ML-Agents toolkit includes
+Beyond the flexible training scenarios available, the ML-Agents Toolkit includes
additional features which improve the flexibility and interpretability of the
training process.

@@ -370,7 +370,7 @@ training process.

## Summary and Next Steps

-To briefly summarize: The ML-Agents toolkit enables games and simulations built
+To briefly summarize: The ML-Agents Toolkit enables games and simulations built
in Unity to serve as the platform for training intelligent agents. It is
designed to enable a large variety of training modes and scenarios and comes
packed with several features to enable researchers and developers to leverage
2 changes: 1 addition & 1 deletion docs/Readme.md
@@ -66,7 +66,7 @@

## Translations

-To make the Unity ML-Agents toolkit accessible to the global research and Unity
+To make the Unity ML-Agents Toolkit accessible to the global research and Unity
developer communities, we're attempting to create and maintain translations of
our documentation. We've started with translating a subset of the documentation
to one language (Chinese), but we hope to continue translating more pages and to
2 changes: 1 addition & 1 deletion docs/Training-Imitation-Learning.md
@@ -27,7 +27,7 @@ See Behavioral Cloning + GAIL + Curiosity + RL below.
width="700" border="0" />
</p>

-The ML-Agents toolkit provides two features that enable your agent to learn from demonstrations.
+The ML-Agents Toolkit provides two features that enable your agent to learn from demonstrations.
In most scenarios, you can combine these two features.

* GAIL (Generative Adversarial Imitation Learning) uses an adversarial approach to
2 changes: 1 addition & 1 deletion docs/Training-PPO.md
@@ -50,7 +50,7 @@ the agent for exploring new states, rather than just when an explicit reward is
Furthermore, we could mix reward signals to help the learning process.

Using `reward_signals` allows you to define [reward signals.](Reward-Signals.md)
-The ML-Agents toolkit provides three reward signals by default: the Extrinsic (environment)
+The ML-Agents Toolkit provides three reward signals by default: the Extrinsic (environment)
reward signal, the Curiosity reward signal, which can be used to encourage exploration in
sparse extrinsic reward environments, and the GAIL reward signal. Please see [Reward Signals](Reward-Signals.md)
for additional details.
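
As a rough sketch (the values below are illustrative placeholders, not tuned recommendations), combining reward signals in a trainer configuration of this era looked roughly like:

```
reward_signals:
  # Extrinsic: the reward defined by the environment
  extrinsic:
    strength: 1.0
    gamma: 0.99
  # Curiosity: an intrinsic reward that encourages exploration
  curiosity:
    strength: 0.02
    gamma: 0.99
```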
6 changes: 3 additions & 3 deletions docs/Unity-Inference-Engine.md
@@ -1,12 +1,12 @@
# Unity Inference Engine

-The ML-Agents toolkit allows you to use pre-trained neural network models
+The ML-Agents Toolkit allows you to use pre-trained neural network models
inside your Unity games. This support is possible thanks to the Unity Inference
Engine. The Unity Inference Engine uses
[compute shaders](https://docs.unity3d.com/Manual/class-ComputeShader.html)
to run the neural network within Unity.

-__Note__: The ML-Agents toolkit only supports the models created with our
+__Note__: The ML-Agents Toolkit only supports the models created with our
trainers.

## Supported devices
@@ -40,6 +40,6 @@ tf2onnx does not currently support tensorflow 2.0.0 or later, or earlier than 1.
When using a model, drag the model file into the **Model** field in the Inspector of the Agent.
Select the **Inference Device**: the CPU or GPU you want to use for inference.

-**Note:** For most of the models generated with the ML-Agents toolkit, CPU will be faster than GPU.
+**Note:** For most of the models generated with the ML-Agents Toolkit, CPU will be faster than GPU.
You should use the GPU only if you use the
ResNet visual encoder or have a large number of agents with visual observations.
4 changes: 2 additions & 2 deletions docs/Using-Tensorboard.md
@@ -1,6 +1,6 @@
# Using TensorBoard to Observe Training

-The ML-Agents toolkit saves statistics during learning sessions that you can view
+The ML-Agents Toolkit saves statistics during learning sessions that you can view
with a TensorFlow utility named
[TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard).

@@ -34,7 +34,7 @@ graphs.
When you run the training program, `mlagents-learn`, you can use the
`--save-freq` option to specify how frequently to save the statistics.
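
As a sketch (the run id and config path below are made up for illustration; the flags follow the `mlagents-learn` CLI of this era), a training run and its TensorBoard view might be launched like:

```
# Save statistics every 50,000 steps, then point TensorBoard at the summaries directory
mlagents-learn config/trainer_config.yaml --run-id=walker-01 --save-freq=50000 --train
tensorboard --logdir=summaries
```

TensorBoard then serves the graphs at `localhost:6006` by default.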

-## The ML-Agents toolkit training statistics
+## The ML-Agents Toolkit training statistics

The ML-Agents training program saves the following statistics:
