High-fidelity synthetic images with Unreal Engine, UnrealCV, and a touch of ROS for flavour

High-quality simulation is the next big thing in robotic vision. Everyone is doing it. You can too, with the help of Unreal Engine and UnrealCV.

UnrealCV is a plugin for Unreal Engine that makes it easy to generate high-visual-fidelity image data from a virtual scene. Their website can be found at http://unrealcv.org/. A demonstration simulator can be downloaded from their website (http://unrealcv.org/reference/model_zoo.html#realistic_rendering). Additional simulators must be built using Unreal Engine, but scenes and scene assets can be purchased or acquired from around the web.

UnrealCV provides a Python API that lets you send string requests to a running Unreal instance, to either get or set properties on objects in the scene (like the camera location) or retrieve a variety of different views from the camera. A partial list of the available commands appears in the documentation (http://unrealcv.org/reference/commands.html), but the full list can only be found in the source code. UnrealCV is under active development, and new commands may be added over time.
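As a rough sketch of what those string requests look like, here is a small helper that builds the get/set commands for moving the camera and fetching a photorealistic ("lit") view. The command patterns follow the UnrealCV documentation, but check the docs and source for the exact set your version supports; the helper names themselves are ours, not part of the UnrealCV API.

```python
from typing import List


def camera_commands(x, y, z, pitch=0.0, yaw=0.0, roll=0.0, cam=0):
    # type: (float, float, float, float, float, float, int) -> List[str]
    """Build the UnrealCV request strings to move camera `cam` to a
    location (in Unreal's coordinate frame, units of centimetres) and
    fetch a lit view from it."""
    return [
        'vset /camera/{}/location {} {} {}'.format(cam, x, y, z),
        'vset /camera/{}/rotation {} {} {}'.format(cam, pitch, yaw, roll),
        'vget /camera/{}/lit'.format(cam),
    ]


def render_at(client, x, y, z):
    """Send the commands through a connected UnrealCV client, e.g. the
    `client` object from the `unrealcv` pip package after calling
    `client.connect()`. Returns the response to each request; the last
    one is the rendered image data."""
    return [client.request(cmd) for cmd in camera_commands(x, y, z)]
```

With the demo simulator running, usage would be roughly `from unrealcv import client; client.connect(); render_at(client, 0, 0, 150)`.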

But maybe all this is too much work, and all you really want is to put a camera pose in and get an image out. Well, we can help! Instead of worrying about all of that, you can use our UnrealCV-ROS bindings (https://github.com/jskinn/unrealcv-ros). Simply start the demo UnrealCV instance, then start simple_node.py. Presto! There is now a ROS service registered that takes a camera pose as input (geometry_msgs/Pose) and returns the image rendered at that pose (sensor_msgs/Image). Easy, and suitable for generating virtual datasets, testing SLAM systems or simple robots, simulating a real camera, or creating an awesome demo.
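Calling that service from your own node might look something like the sketch below. The service name (`render_at_pose`) and the service type (`RenderAtPose`) are illustrative assumptions, not confirmed names; check simple_node.py in the bindings for the names actually registered on your system. Only the Pose field layout (geometry_msgs/Pose: a position point plus an orientation quaternion) is standard ROS.

```python
def make_pose(x, y, z, qx=0.0, qy=0.0, qz=0.0, qw=1.0):
    """Plain-dict stand-in for geometry_msgs/Pose, showing the fields
    the service expects: a position and a unit-quaternion orientation
    (identity rotation by default)."""
    return {
        'position': {'x': x, 'y': y, 'z': z},
        'orientation': {'x': qx, 'y': qy, 'z': qz, 'w': qw},
    }


def request_image(x, y, z, service_name='render_at_pose'):
    """Call the pose-in/image-out service. ROS imports live inside the
    function so the pose helper above works without a ROS install.
    NOTE: `service_name` and the `RenderAtPose` srv type are assumed
    names for illustration; consult the unrealcv-ros repository for
    the real ones."""
    import rospy
    from geometry_msgs.msg import Pose, Point, Quaternion
    from unrealcv_ros.srv import RenderAtPose  # hypothetical srv module

    p = make_pose(x, y, z)
    pose = Pose(position=Point(**p['position']),
                orientation=Quaternion(**p['orientation']))

    rospy.wait_for_service(service_name)
    proxy = rospy.ServiceProxy(service_name, RenderAtPose)
    return proxy(pose).image  # a sensor_msgs/Image
```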

Key Links:

Unreal Engine: https://www.unrealengine.com

UnrealCV: http://unrealcv.org

UnrealCV-ROS bindings: https://github.com/jskinn/unrealcv-ros


Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549