Happy holidays from the Unity ML-Agents team!

On behalf of the Unity ML-Agents team, we want to wish everyone and their loved ones a happy holiday season and new year!  As we close out 2020, we also want to take a moment to highlight a few of our favorite community projects of 2020, recap our progress since the v1.0 release (Release 1) in April 2020, and provide an overview of what’s in store for 2021.

A few of our favorite community projects

Thank you to our entire community for all of the contributions and feedback that have shaped the growth and evolution of the Unity ML-Agents Toolkit.  We continue to be amazed by the creativity of our developers in illustrating new kinds of behaviors and approaches using deep learning.  As we close out the year, we want to showcase some of our favorite projects of 2020.  If you would like to share your own project, please post it in our forum, and if you share it on social media, remember to tag your posts with #mlagents.

A.I. learns to play a game with an Xbox controller

From the virtual world to the real world and back into the virtual world.  Created by LittleFrenchKev.

AI Learns Parallel Parking – Deep Reinforcement Learning

If only we could create an Agent to parallel park our cars in real life. Created by SamuelArzt.

Unity ML-Agents robot simulation transferred to a real-life robot

Illustration of an ML-Agents-trained model being transferred to a real-life robot. Created by jsalli.

Competitive Self-Play | Unity ML-Agents

Awesome graphics, great music, and a great illustration of self-play by mbaske.

Recap since ML-Agents Release 1

Release 1, which came out in April 2020, was centered on API stability, ease of installation, and shipping a verified Unity package.  Since Release 1, we have prioritized shipping incremental improvements and bug fixes on a monthly basis to improve the stability of existing features.  All of the notes and documentation for these improvements and bug fixes can be found in the release notes.

In addition to these improvements and bug fixes, we have also shipped several new features to support training intelligent Agents in Unity projects.

  • Observable Attributes – Enables developers to mark up Agent fields and properties that are turned into observations via reflection (see the first sketch after this list).
  • IActuator interface and ActuatorComponent – Enables developers to compose behaviors onto Agents and abstract away how an Agent’s actions are defined and applied (see the second sketch below).
  • Stacking for compressed observations – Allows stacking for visual observations and other multi-dimensional observations.
  • Grid Sensor – Combines the generality of data extraction from raycasts with the computational efficiency of CNNs.  Allows the collection of arbitrary data from any number of GameObjects while enabling much faster simulation and training.
  • Random Network Distillation (RND) – An intrinsic reward signal, added to the PyTorch trainers, that promotes exploration by rewarding Agents for discovering new observations.
  • Support for discrete and continuous actions – Individual Agents can now take both continuous and discrete actions at the same time, which better represents game-development scenarios such as gamepad input (see the third sketch below).
  • Unity Environment Registry – Database of pre-built Unity environments that can be easily used without having to install the Unity Editor.
  • PyTorch Trainers – All existing reinforcement learning and imitation learning training algorithms have been migrated from TensorFlow to PyTorch. Moving forward, all new training development will be based on PyTorch.
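
To make a few of these features concrete, here is a minimal sketch of the Observable Attributes feature. The agent class and member names below are hypothetical, and reflection-based collection also needs to be enabled on the Agent’s Behavior Parameters (the Observable Attribute Handling setting):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors.Reflection;
using UnityEngine;

// Hypothetical Agent: members tagged with [Observable] are collected
// into observations via reflection, with no need to add them manually
// in a CollectObservations() override.
public class RollerAgent : Agent
{
    // A single float observation.
    [Observable]
    public float DistanceToTarget;

    // Properties work too; a Vector3 contributes three floats.
    [Observable]
    Vector3 Velocity { get; set; }
}
```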
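
Next, a rough sketch of composing a behavior with the IActuator interface and ActuatorComponent. The door scenario and all names here are hypothetical, and the exact interface signatures have shifted slightly between releases:

```csharp
using Unity.MLAgents.Actuators;

// Hypothetical actuator that exposes a single discrete branch with
// two actions (0 = close, 1 = open) for driving a door.
public class DoorActuator : IActuator
{
    public ActionSpec ActionSpec => ActionSpec.MakeDiscrete(2);
    public string Name => "DoorActuator";

    public void OnActionReceived(ActionBuffers actionBuffers)
    {
        bool open = actionBuffers.DiscreteActions[0] == 1;
        // ... toggle or animate the door GameObject here ...
    }

    // No masking or per-episode state is needed in this sketch.
    public void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
    public void ResetData() { }
}

// Attaching this component next to an Agent appends the actuator's
// actions to the Agent's action space.
public class DoorActuatorComponent : ActuatorComponent
{
    public override IActuator CreateActuator() => new DoorActuator();
    public override ActionSpec ActionSpec => ActionSpec.MakeDiscrete(2);
}
```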
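
Finally, a sketch of an Agent receiving continuous and discrete actions together through the ActionBuffers API; the gamepad-style control mapping is a made-up example:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

// Hypothetical Agent mixing both action types, mimicking a gamepad:
// two continuous axes for movement plus one discrete jump button.
public class GamepadAgent : Agent
{
    public override void OnActionReceived(ActionBuffers actions)
    {
        // Continuous segment: analog-stick style axes in [-1, 1].
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];

        // Discrete segment: button state (0 = released, 1 = pressed).
        bool jump = actions.DiscreteActions[0] == 1;

        // ... apply movement and jumping to the character here ...
    }
}
```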
