Usage Guide
This guide walks you through a basic example of how to use ViVa-SAFELAND.
First, make sure you have the necessary dependencies installed. Then you can run the simulation directly from your terminal.
Quick Start
Graphical User Interface (GUI)
You can start the simulator with this command line:
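Assuming the package installs a `viva` console entry point (the `--help` text in the CLI section below refers to it), launching it with no arguments opens the GUI:

```shell
viva
```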

This command will start a GUI where you can configure the simulation parameters:
- Video Drone Configuration:
  - Video Directory: the directory where your video files are located.
  - Video File: the specific video file you want to use for the simulation.
- Environment Configuration:
  - FPS: the frames per second setting for the simulation (default: 30).
  - Show FPS: if set, the FPS is displayed in the terminal.
  - Fixed: if set, the background will be only the first frame of the video sequence.
  - Use Auto Altitude: if set, the initial altitude is determined automatically from the video metadata (`.SRT` file). Default: True.
  - Altitude: if you don't have the `.SRT` file, or you want to specify a different altitude, you can use this option. It is enabled only when Use Auto Altitude is unchecked.
- Simulation Control:
  - Start: begin the simulation with the configured parameters.
  - Stop: stop the simulation.
Command Line Interface (CLI)
Alternatively, you can run the simulator by specifying the options directly in the terminal; this method does not activate the GUI.
Options
You can customize the simulation with the same options as in the GUI:
- `--help`: show info about the usage of viva.
- `--render-fps <fps>`: set the frames per second for rendering (default: 30).
- `--fixed`: if set, the background will be only the first frame of the video sequence.
- `--rel-alt-value <initial_altitude>`: set the initial relative altitude of the drone in meters. If not provided, it will be extracted from the video metadata (`.SRT` file).
- `--show-fps-flag`: if set, the FPS is displayed in the terminal.
- `--no-hover-z`: disable automatic altitude control (Z axis).
- `--no-hover-xy`: disable automatic position control (X and Y axes).
- `--no-perspective`: disable perspective transformation.
Example with options:
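For instance, the documented flags can be combined like this (assuming a `viva` entry point, as referenced by the `--help` description):

```shell
viva --render-fps 60 --fixed --rel-alt-value 50.0 --show-fps-flag
```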
Controls
You can move the drone with the keyboard, or with a joystick if you connect one. The default controls are:

Keyboard

Thrust and orientation
- W/S: Increase/Decrease Thrust
- A/D: Yaw anticlockwise/clockwise
Movement
- Up/Down Arrow: Move Forward/Backward
- Left/Right Arrow: Move Left/Right
Special Actions
- R: Reset drone state
- Esc: Exit from the simulation

Joystick

Thrust and orientation
- Left Stick Up/Down: Increase/Decrease Thrust
- Left Stick Left/Right: Yaw anticlockwise/clockwise
Movement
- Right Stick Up/Down: Move Forward/Backward
- Right Stick Left/Right: Move Left/Right
Special Actions
- BACK button: Reset drone state
- B button: Exit from the simulation
Note
The joystick controls use a universal mapping, so the positions of the sticks and buttons are the same for all joysticks.
Application Usage
If you want to build your own application on top of ViVa-SAFELAND, you can use the `DroneEnv` class from the `viva` package. Below is a minimal example of how to use it:
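A minimal sketch of the typical loop. The constructor arguments and the `reset`/`step` method names shown here are assumptions based on the common Gym-style environment interface; check the `DroneEnv` API reference for the actual signatures.

```python
from viva import DroneEnv

# Hypothetical constructor arguments; see the API reference for the real signature.
env = DroneEnv(video_path="path/to/video.MP4", render_fps=30)

obs = env.reset()  # obs: RGB array with the current drone view
for _ in range(100):
    action = [0.0, 0.0, 0.0, 0.0]  # assumed layout: thrust, yaw, x, y commands
    obs, info = env.step(action)
    # ... run your computer vision algorithm on `obs` here ...
env.close()
```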
Useful information
`obs` is an RGB array representing the current drone view and one of the most important outputs of the environment; you can treat it as an image and apply your computer vision algorithms to it.
The full documentation of the DroneEnv class can be found in the API reference.
Example with Controller
To add a controller to the drone you can import the HMI class from the viva package and use it to control the drone with a keyboard or joystick. Below is an example of how to integrate the controller:
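A sketch of wiring the HMI input into the simulation loop. The `HMI` method name shown (`get_action`) and the constructor arguments are assumptions; see the `HMI` API reference for the actual interface.

```python
from viva import DroneEnv, HMI

env = DroneEnv(video_path="path/to/video.MP4")  # hypothetical arguments
hmi = HMI()  # reads keyboard/joystick state

obs = env.reset()
while True:
    action = hmi.get_action()  # assumed method: maps key/stick state to an action
    obs, info = env.step(action)
```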
For full documentation of the HMI class, see the API reference.
Example with Motion Blur
You can also apply post-processing effects to the drone's view. One such effect is Motion Blur, which simulates the blur caused by the drone's movement and shutter speed. This is useful for creating more realistic visual datasets for training.
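One simple way to approximate this effect with plain NumPy (a sketch of the idea, not ViVa-SAFELAND's actual implementation) is a directional averaging filter that mixes each pixel with its horizontally shifted neighbors:

```python
import numpy as np

class MotionBlur:
    """Approximate horizontal motion blur by averaging shifted copies of the frame."""

    def __init__(self, kernel_size=9):
        # Larger kernel_size simulates faster motion / slower shutter speed.
        self.kernel_size = kernel_size

    def apply(self, obs):
        # Accumulate shifted copies of the frame (np.roll wraps at the border,
        # which is acceptable for a sketch) and average them.
        acc = np.zeros(obs.shape, dtype=np.float32)
        for shift in range(self.kernel_size):
            acc += np.roll(obs, shift, axis=1)
        return (acc / self.kernel_size).astype(np.uint8)

# Usage in the simulation loop:
# blur = MotionBlur(kernel_size=9)
# blurred_obs = blur.apply(obs)
```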
Adding Custom Effects
ViVa-SAFELAND is designed to be extensible. If you need to simulate other conditions, such as different lighting, weather effects, or camera distortions, you can easily do so by creating a custom class and applying it to the obs array returned by the environment.
The obs output is a standard NumPy array (RGB), making it compatible with libraries like OpenCV, PIL, or any deep learning framework.
Example: Simulating Low Light
If you want to simulate different times of the day or low-light conditions, you can implement a simple class to adjust the brightness:
```python
import numpy as np

class LightingEffect:
    def __init__(self, brightness=0.5):
        self.brightness = brightness  # 1.0 is original, < 1.0 is darker

    def apply(self, obs):
        # Simply scale the pixel values
        return (obs * self.brightness).astype(np.uint8)

# Usage in the simulation loop:
# effect = LightingEffect(brightness=0.3)
# darkened_obs = effect.apply(obs)
```
This modular approach allows you to stack multiple effects (e.g., Motion Blur + Low Light) to create complex and specialized datasets for your drone's vision system.