RL in 0 A.D.: Custom State and Action Spaces
In this post, we will explore defining our own state and action spaces for an RL agent. In the initial introduction post, we trained an RL agent to micro ranged cavalry units against a small army of infantry. Not only is this a simplified scenario, but we also presented the “world” (and the possible actions) to the agent in a simple, easy-to-learn form.
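To make that concrete, here is a minimal sketch of how custom spaces might be declared with OpenAI gym's space classes. The unit count, observation features, and per-unit action set below are illustrative placeholders, not the actual 0 A.D. interface.

```python
import numpy as np
from gym import spaces

# Hypothetical spaces for a cavalry-micro scenario; the exact features
# and action set are placeholders, not the 0 A.D. API.

# Observation: for each of (up to) 5 cavalry units, a few normalized
# features such as (x, y) position, health, and the same for the
# nearest enemy -- 6 numbers per unit in this sketch.
observation_space = spaces.Box(low=0.0, high=1.0, shape=(5, 6), dtype=np.float32)

# Action: each unit either attacks the nearest enemy or retreats in one
# of four directions, i.e. 5 discrete choices per unit.
action_space = spaces.MultiDiscrete([5] * 5)

# Sampling shows the shapes the agent would observe and emit.
print(observation_space.sample().shape)  # (5, 6)
print(action_space.sample())             # e.g. [2 0 4 1 3]
```

Keeping the spaces this small is part of what makes the scenario easy to learn; richer representations are exactly what the post goes on to explore.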
First Steps with RL in 0 A.D.
Machine learning and reinforcement learning have been making impressive strides across a variety of domains, from video games to robotics. In this post, we will show how you can get up and running with reinforcement learning in 0 A.D., an open-source RTS game! We will assume some background knowledge of the key concepts in reinforcement learning, as well as familiarity with OpenAI gym. Another good resource for learning about state and action spaces is available on the OpenAI gym website!
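For readers less familiar with OpenAI gym, the basic interaction pattern looks like the following sketch. It uses the classic gym API and the standard CartPole-v1 environment as a stand-in for a game-specific one; the loop structure is the same regardless of the environment.

```python
import gym

# Standard OpenAI gym interaction loop; CartPole-v1 stands in for a
# game-specific environment here.
env = gym.make("CartPole-v1")

observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("episode reward:", total_reward)
env.close()
```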
Intro to Race Conditions using NetsBlox
Background
Sharing can be tough for both young children and parallel applications. In both cases, a lack of coordination can lead to undesirable results. Although the two may be equally challenging, this blog post will focus on the latter. One of the most significant challenges here is the race condition. A race condition (or data race) can occur when two or more agents/processes try to access the same data at the same time, as sketched below. This post focuses on both high- and low-level data races and shows examples in NetsBlox.
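NetsBlox itself is block-based, but the same read-modify-write race is easy to sketch in a text language. The Python example below is illustrative only: two threads increment a shared counter without any coordination, so updates can be lost.

```python
import threading

counter = 0  # shared state

def increment_many(n):
    global counter
    for _ in range(n):
        # Read-modify-write is not atomic: two threads can read the same
        # value, each add 1, and one of the updates gets lost.
        current = counter
        counter = current + 1

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but the data race may leave it lower.
print(counter)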
Automatic Differentiation in NetsBlox
Background
Automatic differentiation is a method for automatically evaluating the derivative of a function at a given input. It is related to, but distinct from, symbolic differentiation and numerical differentiation. Unlike symbolic differentiation, which produces a new function (the derivative) from the input function, automatic differentiation simply yields the derivative evaluated at the given input rather than a symbolic representation of the derivative itself.
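As a rough illustration of that distinction (not anything NetsBlox-specific), the following Python sketch implements forward-mode automatic differentiation with dual numbers. No symbolic expression for f'(x) is ever built; we only get the numeric value of the derivative at the point we evaluate.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """A value together with its derivative (forward-mode AD)."""
    val: float
    dot: float  # derivative with respect to the input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: d(uv) = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def f(x):
    # f(x) = 3x^2 + 2x, so f'(x) = 6x + 2
    return 3 * x * x + 2 * x

x = Dual(4.0, 1.0)  # seed the input's derivative with 1
result = f(x)
print(result.val)   # 56.0  (f(4))
print(result.dot)   # 26.0  (f'(4) = 6*4 + 2)
```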