API#

Type aliases#

type Vector#
Canonical:

numpy.ndarray[tuple[typing.Literal[2]], numpy.dtype[numpy.float64]]

A two-dimensional vector in cm.

type Polygon#
Canonical:

Sequence[Vector]

A sequence of vertices ordered counter-clockwise.

type Part#
Canonical:

tuple[Polygon, float] | tuple[Polygon, float, Sequence[Color]]

A prismatic object part, defined either as (base, height) or as (base, height, face_colors), where face_colors must have the same length as base.

Warning

At the moment, textures (face_colors) are used to compute the response of the sensors (cameras) but are ignored when displaying the object.

Color#

class Color#

Bases: pybind11_object

Parameters:
  • r (float) – Red channel, in [0, 1], optional (default 0.0)

  • g (float) – Green channel, in [0, 1], optional (default 0.0)

  • b (float) – Blue channel, in [0, 1], optional (default 0.0)

  • a (float) – Alpha channel, in [0, 1], optional (default 1.0)

An RGBA color with values between 0.0 and 1.0.

black#

Black color (readonly)

Type:

Color

gray#

Gray color (readonly)

Type:

Color

white#

White color (readonly)

Type:

Color

red#

Red color (readonly)

Type:

Color

green#

Green color (readonly)

Type:

Color

blue#

Blue color (readonly)

Type:

Color

r#

Red channel, in [0, 1]

Type:

float

g#

Green channel, in [0, 1]

Type:

float

b#

Blue channel, in [0, 1]

Type:

float

a#

Alpha channel, in [0, 1]

Type:

float

components#

Components, in [0, 1]

Type:

tuple[float, float, float, float]

threshold(limits: Color) None#

Threshold the color using limits. For each component, if the value is below the corresponding limit, it is set to 0.

Parameters:

limits (Color) – the threshold

toGray() float#

Return the gray level value.

Returns:

the average intensity of the channels.

Return type:

float
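
Example (a minimal sketch; the printed values assume that toGray() averages the three color channels and that threshold() zeroes every component strictly below the corresponding limit):

>>> from pyenki import Color
>>> c = Color(r=0.8, g=0.4, b=0.1)
>>> round(c.toGray(), 2)  # (0.8 + 0.4 + 0.1) / 3
0.43
>>> c.threshold(Color(r=0.5, g=0.5, b=0.5))  # zero channels below 0.5
>>> c.components
(0.8, 0.0, 0.0, 1.0)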

PhysicalObject#

class PhysicalObject#

Bases: pybind11_object

The superclass of objects that can be simulated.

uid#

A unique identifier (readonly)

Type:

int

world#

The world the object belongs to (readonly)

Type:

World

has_collided#

Whether the object has collided in the last simulation step (readonly)

Type:

bool

name#

The name (readonly)

Type:

string

radius#

The radius of the object’s enclosing circle in centimeters (readonly)

Type:

float

height#

The object height in centimeters (readonly)

Type:

float

is_cylindric#

True if the object is cylindrical (readonly)

Type:

bool

mass#

The object mass in kilograms. If below zero, the object is static (readonly)

Type:

float

moment_of_inertia#

The object's moment of inertia (readonly)

Type:

float

color#

The object color.

Type:

Color

collision_elasticity#

The elasticity of collisions for this object. A value of 0 means a fully soft collision (100% energy dissipation); a value of 1 means a fully elastic collision (no energy dissipation). The actual elasticity of a collision is the product of the elasticities of the two colliding objects. Walls are fully elastic.

Type:

float

dry_friction_coefficient#

The dry friction coefficient mu

Type:

float

viscous_friction_coefficient#

The viscous friction coefficient. Premultiplied by mass. A value of k applies a force of -k * speed * mass

Type:

float

viscous_moment_friction_coefficient#

The viscous friction moment coefficient. Premultiplied by moment_of_inertia. A value of k applies a moment of -k * angular_speed * moment_of_inertia

Type:

float

position#

The position in the world frame in centimeters

Type:

Vector

angle#

The orientation in the world frame in radians

Type:

float

velocity#

The velocity in the world frame in centimeters per second

Type:

Vector

angular_speed#

The angular speed in the world frame in radians per second

Type:

float

collision_callback#

An optional function called when the object collides.

Type:

Callable[[PhysicalObject, PhysicalObject], None] | None

control_step_callback#

An optional function called when the object performs a control step.

Type:

Callable[[PhysicalObject, float], None] | None

control_step(time_step: SupportsFloat) None#

The controller associated with the object.

Should be overridden by subclasses to implement controllers, in particular for robots. Alternatively, users can assign a callback to control_step_callback.

Parameters:

time_step (float) – The time step of the simulation.
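
For instance, a minimal sketch of the callback approach (assuming, per the descriptions above, that the float argument is the time step and that the callback is invoked once per control step):

>>> from pyenki import World, CircularObject, Color
>>> world = World()
>>> obj = CircularObject(radius=2.0, height=5.0, mass=1.0,
...                      color=Color(1.0, 0.0, 0.0))
>>> world.add_object(obj)
>>> positions = []
>>> # record the object position at every control step
>>> obj.control_step_callback = lambda o, dt: positions.append(o.position)
>>> world.step(0.1)
>>> len(positions)
1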

RectangularObject(l1: SupportsFloat, l2: SupportsFloat, height: SupportsFloat, mass: SupportsFloat, color: Color = Color(r=0.0, g=0.0, b=0.0, a=1.0)) PhysicalObject#

Creates a rectangular prism.

Parameters:
  • l1 (float) – the side length in cm (x).

  • l2 (float) – the side length in cm (y).

  • height (float) – The height in cm.

  • mass (float) – The mass in kg.

  • color (Color) – The color.

Returns:

A rectangular prism.

Return type:

PhysicalObject

CircularObject(radius: SupportsFloat, height: SupportsFloat, mass: SupportsFloat, color: Color = Color(r=0.0, g=0.0, b=0.0, a=1.0)) PhysicalObject#

Creates a cylinder.

Parameters:
  • radius (float) – The radius in cm.

  • height (float) – The height in cm.

  • mass (float) – The mass in kg.

  • color (Color) – The color.

Returns:

A cylinder.

Return type:

PhysicalObject

CompositeObject(parts: list, mass: SupportsFloat, color: Color = Color(r=0.0, g=0.0, b=0.0, a=1.0)) PhysicalObject#

Creates an object composed of parts.

Parameters:
  • parts (Sequence[Part]) – A sequence of parts.

  • mass (float) – The mass in kg.

  • color (Color) – The color.

Returns:

A composed object.

Return type:

PhysicalObject

Warning

At the moment, textures (face_colors) are used to compute the response of the sensors (cameras) but are ignored when displaying the object.

ConvexObject(base: Sequence[Vector], height: SupportsFloat, mass: SupportsFloat, color: Color = Color(r=0.0, g=0.0, b=0.0, a=1.0), face_colors: Sequence[Color] = []) PhysicalObject#

Creates a vertical prism with a convex polygonal base.

Parameters:
  • base (Polygon) – The vertices of the polygonal base in cm. Must be convex.

  • height (float) – The height in cm.

  • mass (float) – The mass in kg.

  • color (Color) – The color.

  • face_colors (Sequence[Color]) – if not empty, defines the colors of each face.

Returns:

A convex prism.

Return type:

PhysicalObject

Warning

At the moment, textures (face_colors) are used to compute the response of the sensors (cameras) but are ignored when displaying the object.
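
For example, an L-shaped object can be assembled from two prismatic parts with CompositeObject() (a sketch; it assumes that plain (x, y) tuples are accepted wherever a Vector is expected, as in the position assignments used elsewhere in this document):

>>> from pyenki import World, CompositeObject, Color
>>> # two rectangular bases; vertices ordered counter-clockwise, in cm
>>> base_1 = [(0, 0), (10, 0), (10, 5), (0, 5)]
>>> base_2 = [(0, 5), (5, 5), (5, 15), (0, 15)]
>>> parts = [(base_1, 5.0), (base_2, 5.0)]  # two (base, height) parts
>>> l_shape = CompositeObject(parts, mass=2.0, color=Color(0.2, 0.4, 0.8))
>>> world = World()
>>> world.add_object(l_shape)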

Robots#

Robot#

class Robot#

Bases: PhysicalObject

Base class for all robots

DifferentialWheeled#

class DifferentialWheeled#

Bases: Robot, PhysicalObject

The virtual base class shared by all robots currently implemented in Enki.

left_wheel_target_speed#

The target left wheel speed in centimeters per second.

Type:

float

right_wheel_target_speed#

The target right wheel speed in centimeters per second.

Type:

float

left_wheel_encoder_speed#

The current left wheel speed in centimeters per second (readonly).

Type:

float

right_wheel_encoder_speed#

The current right wheel speed in centimeters per second (readonly).

Type:

float

left_wheel_odometry#

The left wheel odometry integrated from measured wheel speeds in centimeters (readonly).

Type:

float

right_wheel_odometry#

The right wheel odometry integrated from measured wheel speeds in centimeters (readonly).

Type:

float

wheel_axis#

The distance between wheels in cm (readonly).

Type:

float

max_wheel_speed#

The maximal wheel speed in centimeters per second (readonly).

Type:

float

wheel_speed_noise#

The relative noise applied to the target wheel speed at each control step.

Type:

float

reset_encoders() None#

Reset the odometry of both wheels.
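
For example, with a concrete robot such as the EPuck documented below (a sketch; the integrated odometry only approximates the commanded distance, depending on noise and dynamics):

>>> from pyenki import World, EPuck
>>> world = World()
>>> robot = EPuck()
>>> world.add_object(robot)
>>> robot.left_wheel_target_speed = 5.0
>>> robot.right_wheel_target_speed = 5.0
>>> for _ in range(10):
...     world.step(0.1)
>>> distance = robot.left_wheel_odometry  # roughly 5 cm/s * 1 s
>>> robot.reset_encoders()
>>> robot.left_wheel_odometry
0.0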

E-puck#

class EPuck#

Bases: DifferentialWheeled, PhysicalObject

Parameters:
  • proximity (bool) – enable the proximity sensors

  • camera (bool) – enable the camera

  • scanner (bool) – enable the scanner

A DifferentialWheeled e-Puck robot.

The robot has a 60-pixel forward-looking camera placed at a height of 2.2 cm with a field of view of 60 degrees, and 8 infrared proximity sensors (maximal range 12 cm) placed at a height of 2.5 cm at angles -18, -45, -90, -142, 142, 90, 45, and 18 degrees.

Example:

>>> from pyenki import World, EPuck, CircularObject, Color
>>> world = World()
>>> epuck = EPuck()
>>> world.add_object(epuck)
>>> epuck.position = (-10, 0)
>>> epuck.set_led_ring(True)
>>> world.add_object(CircularObject(2.0, 5.0, -1, Color(0.3, 0.7, 0)))
>>> world.step(0.1)
>>> epuck.prox_values
array([98.52232283,  3.30197324, ...
>>> epuck.camera_image
array([[0.5, 0.5, 0.5, 1. ] ...

prox_values#

An array of 8 proximity sensor readings, one for each sensor (readonly).

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

prox_distances#

An array of 8 distances between each proximity sensor and the nearest obstacle, one for each sensor (readonly). Please note that this value would not be directly accessible by a real robot.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

scan#

An array of 64 radial distances, ordered from -180 degrees to 180 degrees, in centimeters.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

camera_image#

An RGBA array with values between 0 and 1, of shape (60, 4) (readonly).

Type:

numpy.ndarray[tuple[int, int], numpy.dtype[numpy.float64]]

__init__(proximity: bool = True, camera: bool = False, scanner: bool = False) None#
set_led_ring(value: bool) None#

Set all the red LEDs on or off.

Parameters:

value (bool) – the desired LED state.
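
A sketch of a reactive controller built by subclassing EPuck (the gains and the left/right grouping of the sensors are illustrative assumptions):

>>> from pyenki import World, EPuck
>>> class AvoidingEPuck(EPuck):
...     def control_step(self, dt: float) -> None:
...         # steer away from the side with the stronger proximity readings
...         right = max(self.prox_values[0:3])  # sensors at negative angles (assumed)
...         left = max(self.prox_values[5:8])   # sensors at positive angles (assumed)
...         self.left_wheel_target_speed = 5.0 + 0.01 * left
...         self.right_wheel_target_speed = 5.0 + 0.01 * right
>>> world = World()
>>> world.add_object(AvoidingEPuck())
>>> world.run(steps=100, time_step=0.1)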

Marxbot#

class Marxbot#

Bases: DifferentialWheeled, PhysicalObject

A DifferentialWheeled Marxbot robot.

The robot has a planar, omni-directional scanner, placed centrally at a height of 11 cm, which detects surrounding objects (distance and color).

Example:

>>> from pyenki import World, Marxbot, Color
>>> # create a world surrounded by a cylindrical wall
>>> world = World(radius=20.0, walls_color=Color(0.8, 0.2, 0.1))
>>> marxbot = Marxbot()
>>> world.add_object(marxbot)
>>> marxbot.position = (10.0, 0.0)
>>> marxbot.angle = 0.0
>>> # Spin the robot on itself
>>> marxbot.left_wheel_target_speed = 5.0
>>> marxbot.right_wheel_target_speed = -5.0
>>> world.step(0.1)
>>> # Read the omnidirectional rgbd camera
>>> # Distances
>>> marxbot.scanner_distances
array([30.        , 29.73731892, ...
>>> # Image
>>> marxbot.scanner_image
array([[0.8, 0.2, 0.1, 1. ], ...

scanner_range#

The range of the scanner. Default is infinite.

Type:

float

scanner_distances#

An array of 180 radial distances, ordered from -180 degrees to 180 degrees, in centimeters (readonly).

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

scanner_image#

An RGBA array with values between 0 and 1, of shape (180, 4) (readonly).

Type:

numpy.ndarray[tuple[int, int], numpy.dtype[numpy.float64]]
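
Continuing the Marxbot example above, the radial distances can be converted to points in the robot frame (a sketch; the exact angular alignment of the 180 samples is an assumption):

>>> import numpy as np
>>> angles = np.linspace(-np.pi, np.pi, 180, endpoint=False)
>>> xs = marxbot.scanner_distances * np.cos(angles)
>>> ys = marxbot.scanner_distances * np.sin(angles)
>>> points = np.stack([xs, ys], axis=1)  # shape (180, 2), in cm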

__init__() None#

Thymio2#

class IRCommEvent#

Bases: pybind11_object

This event is created each time a message is received by at least one proximity sensor. The sensors that do not receive the message have their corresponding payloads and intensities set to zero.

rx_value#

The received message payload (readonly)

Type:

int

payloads#

An array of 7 integer payloads, one for each sensor (readonly). The first 5 entries are from frontal sensors ordered from left to right. The last two entries are from rear sensors ordered from left to right.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.int64]]

intensities#

An array of 7 integer intensities, one for each sensor (readonly). The first 5 entries are from frontal sensors ordered from left to right. The last two entries are from rear sensors ordered from left to right.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.int64]]

class Thymio2#

Bases: DifferentialWheeled, PhysicalObject

A DifferentialWheeled Thymio2 robot. Attribute names mimic the Aseba interface, see http://wiki.thymio.org/en:thymioapi.

Example:

>>> from pyenki import Thymio2
>>> thymio = Thymio2()
>>> thymio.position = (3.2, 5.6)
>>> thymio.angle = 1.3
>>> # Spin the robot on itself
>>> thymio.motor_left_target = 10.2
>>> thymio.motor_right_target = -10.2
>>> # Switch the top LED yellow
>>> thymio.set_led_top(0.5, 0.5, 0.0)
>>> thymio.prox_values
array([   0.        ,    0., ...

For some methods and attributes there is an alternative version with the suffix _i, which uses integers in the same units used by Aseba. For example, prox_values_i is the integer counterpart of prox_values.

prox_values#

An array of 7 proximity sensor readings, one for each sensor (readonly). The first 5 entries are from frontal sensors ordered from left to right. The last two entries are from rear sensors ordered from left to right.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

prox_values_i#

An array of 7 proximity sensor readings, one for each sensor (readonly). The first 5 entries are from frontal sensors ordered from left to right. The last two entries are from rear sensors ordered from left to right.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.int64]]

prox_distances#

An array of 7 distances between each proximity sensor and the nearest obstacle, one for each sensor (readonly); please note that this value would not be directly accessible by a real robot. The first 5 entries are from frontal sensors ordered from left to right. The last two entries are from rear sensors ordered from left to right.

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

prox_comm_tx#

The integer payload to be sent. The real robot can only send 11 bits; therefore, to be compliant, the value should be kept between 0 and 2047.

Type:

int

prox_comm_enabled#

Enable/disable proximity communication.

Type:

bool

prox_comm_events#

A list of events, one for every received message during the last control step (readonly).

Type:

list[IRCommEvent]
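
A sketch of exchanging a message between two robots (positions are illustrative; whether and when a message is received depends on range and on the orientation of the sensors):

>>> from pyenki import World, Thymio2
>>> world = World()
>>> tx, rx = Thymio2(), Thymio2()
>>> for robot in (tx, rx):
...     world.add_object(robot)
...     robot.prox_comm_enabled = True
>>> rx.position = (8.0, 0.0)
>>> tx.prox_comm_tx = 42  # payload must fit in 11 bits (0-2047)
>>> world.step(0.1)
>>> for event in rx.prox_comm_events:
...     print(event.rx_value, event.payloads, event.intensities)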

ground_values#

An array of 2 ground sensor readings, one for each sensor (readonly)

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.float64]]

ground_values_i#

An array of 2 ground sensor readings, one for each sensor (readonly)

Type:

numpy.ndarray[tuple[int], numpy.dtype[numpy.int64]]

left_wheel_target_speed_i#

The target left wheel speed in ticks per second.

Type:

int

right_wheel_target_speed_i#

The target right wheel speed in ticks per second.

Type:

int

left_wheel_encoder_speed_i#

The current left wheel speed in ticks per second (readonly).

Type:

int

right_wheel_encoder_speed_i#

The current right wheel speed in ticks per second (readonly).

Type:

int

left_wheel_odometry_i#

The left wheel odometry integrated from measured wheel speeds in ticks (readonly).

Type:

int

right_wheel_odometry_i#

The right wheel odometry integrated from measured wheel speeds in ticks (readonly).

Type:

int

__init__() None#

set_led_bottom_left(red: SupportsFloat = 0, green: SupportsFloat = 0, blue: SupportsFloat = 0) None#

Control the bottom left RGB LED color

Parameters:
  • red (float) – the value of the red channel

  • green (float) – the value of the green channel

  • blue (float) – the value of the blue channel

set_led_bottom_left_i(red: SupportsInt = 0, green: SupportsInt = 0, blue: SupportsInt = 0) None#

Control the bottom left RGB LED color

Parameters:
  • red (int) – the value of the red channel between 0 and 31

  • green (int) – the value of the green channel between 0 and 31

  • blue (int) – the value of the blue channel between 0 and 31

set_led_bottom_right(red: SupportsFloat = 0, green: SupportsFloat = 0, blue: SupportsFloat = 0) None#

Control the bottom right RGB LED color

Parameters:
  • red (float) – the value of the red channel

  • green (float) – the value of the green channel

  • blue (float) – the value of the blue channel

set_led_bottom_right_i(red: SupportsInt = 0, green: SupportsInt = 0, blue: SupportsInt = 0) None#

Control the bottom right RGB LED color

Parameters:
  • red (int) – the value of the red channel between 0 and 31

  • green (int) – the value of the green channel between 0 and 31

  • blue (int) – the value of the blue channel between 0 and 31

set_led_buttons(index: SupportsInt, value: SupportsFloat) None#

Control the four button LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (float) – the desired intensity between 0 and 1.

set_led_buttons_i(index: SupportsInt, value: SupportsInt) None#

Control the four button LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (int) – the desired intensity between 0 and 31.

set_led_circle(index: SupportsInt, value: SupportsFloat) None#

Control the 8 circle LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (float) – the desired intensity between 0 and 1.

set_led_circle_i(index: SupportsInt, value: SupportsInt) None#

Control the 8 circle LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (int) – the desired intensity between 0 and 31.

set_led_left_blue(value: SupportsFloat) None#

Control the left blue LEDs

Parameters:

value (float) – the desired intensity between 0 and 1.

set_led_left_blue_i(value: SupportsInt) None#

Control the left blue LEDs

Parameters:

value (int) – the desired intensity between 0 and 31.

set_led_left_red(value: SupportsFloat) None#

Control the left red LEDs

Parameters:

value (float) – the desired intensity between 0 and 1.

set_led_left_red_i(value: SupportsInt) None#

Control the left red LEDs

Parameters:

value (int) – the desired intensity between 0 and 31.

set_led_prox(index: SupportsInt, value: SupportsFloat) None#

Control the 8 proximity LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (float) – the desired intensity between 0 and 1.

set_led_prox_i(index: SupportsInt, value: SupportsInt) None#

Control the 8 proximity LEDs

Parameters:
  • index (int) – the index of the LED. Set to -1 to control all LEDs.

  • value (int) – the desired intensity between 0 and 31.

set_led_right_blue(value: SupportsFloat) None#

Control the right blue LEDs

Parameters:

value (float) – the desired intensity between 0 and 1.

set_led_right_blue_i(value: SupportsInt) None#

Control the right blue LEDs

Parameters:

value (int) – the desired intensity between 0 and 31.

set_led_right_red(value: SupportsFloat) None#

Control the right red LEDs

Parameters:

value (float) – the desired intensity between 0 and 1.

set_led_right_red_i(value: SupportsInt) None#

Control the right red LEDs

Parameters:

value (int) – the desired intensity between 0 and 31.

set_led_top(red: SupportsFloat = 0, green: SupportsFloat = 0, blue: SupportsFloat = 0) None#

Control the top RGB LED color

Parameters:
  • red (float) – the value of the red channel

  • green (float) – the value of the green channel

  • blue (float) – the value of the blue channel

set_led_top_i(red: SupportsInt = 0, green: SupportsInt = 0, blue: SupportsInt = 0) None#

Control the top RGB LED color

Parameters:
  • red (int) – the value of the red channel between 0 and 31

  • green (int) – the value of the green channel between 0 and 31

  • blue (int) – the value of the blue channel between 0 and 31

World#

class World#

Bases: pybind11_object

The world is the container of all objects and robots. It is either

  • a rectangular arena with walls on all sides:

    World(width: float, height: float, ...)
    
  • a circular arena with walls:

    World(radius: float, ...)
    
  • or an infinite surface:

    World()
    
Parameters:
  • width (float) – The rectangular world width in centimeters

  • height (float) – The rectangular world height in centimeters

  • radius (float) – The circular world radius in centimeters

  • seed (int) – The random seed

  • walls_color (Color) – Optional wall color, default is Color.gray

Example:

from pyenki import World, Thymio2, RectangularObject, Color

world = World()
thymio = Thymio2()
world.add_object(thymio)
wall = RectangularObject(l1=10, l2=50, height=5, mass=1,
                         color=Color(0.5, 0.3, 0.3))
world.add_object(wall)
# Run 100 times a 0.1 s long simulation step
for _ in range(100):
    world.step(0.1)

objects#

The list of all objects

Type:

list[PhysicalObject]

robots#

The list of all robots

Type:

list[Robot]

static_objects#

The list of all objects that are not robots

Type:

list[PhysicalObject]

control_step_callback#

A function called at each update step.

Type:

Callable[[World, float], None] | None

random_seed#

The random seed.

Type:

int

__init__(*args, **kwargs)#

Overloaded function.

  1. __init__(seed: typing.SupportsInt = 0) -> None

  2. __init__(width: typing.SupportsFloat, height: typing.SupportsFloat, walls_color: Color = Color(r=0.5, g=0.5, b=0.5, a=1.0), seed: typing.SupportsInt = 0) -> None

  3. __init__(radius: typing.SupportsFloat, walls_color: Color = Color(r=0.5, g=0.5, b=0.5, a=1.0), seed: typing.SupportsInt = 0) -> None

add_object(object: PhysicalObject) None#

Add an object to the simulation.

Parameters:

object (PhysicalObject) – the object to add.

copy_random_generator(arg0: World) None#

Copy the state of the random generator from another world.

remove_object(object: PhysicalObject) None#

Remove an object from the simulation.

Parameters:

object (PhysicalObject) – the object to remove.

render(camera_reset: bool = False, camera_position: Vector = array([0., 0.]), camera_altitude: SupportsFloat = 0.0, camera_yaw: SupportsFloat = 0.0, camera_pitch: SupportsFloat = 0.0, camera_is_ortho: bool = False, walls_height: SupportsFloat = 10.0, width: SupportsFloat = 640, height: SupportsFloat = 360) numpy.ndarray#

Render a world to an RGB image array.

Parameters:
  • camera_reset (bool) – whether to set the camera in the default pose.

  • camera_position (Vector) – the horizontal position of the camera.

  • camera_altitude (float) – the vertical position of the camera.

  • camera_yaw (float) – the camera rotation around the vertical axis.

  • camera_pitch (float) – the camera vertical rotation.

  • camera_is_ortho (bool) – whether the camera uses an orthographic projection.

  • walls_height (float) – the height of the world boundary in cm.

  • width (int) – the width of the image in pixels.

  • height (int) – the height of the image in pixels.

Returns:

An array of shape (height, width, 3) and type uint8.

Return type:

numpy.ndarray[tuple[int, int, int], numpy.dtype[numpy.uint8]]
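
For example (a sketch; the world size and image dimensions are illustrative):

>>> from pyenki import World
>>> world = World(width=100.0, height=100.0)
>>> image = world.render(camera_reset=True, width=320, height=240)
>>> image.shape, image.dtype
((240, 320, 3), dtype('uint8'))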

run(steps: SupportsInt = 1, time_step: SupportsFloat = 0.03333333333333333, physics_oversampling: SupportsInt = 3, termination: Callable[[World], bool] | None = None, callback: Callable[[World], None] | None = None) None#

Run a simulation.

Parameters:
  • steps (int) – the number of steps.

  • time_step (float) – the time step.

  • physics_oversampling (int) – the number of times the physics is updated per step to get a more fine-grained physical simulation compared to the sensor-motor loop.

  • termination (Callable[[World], bool] | None) – an optional function that terminates the simulation when it returns True.

  • callback (Callable[[World], None] | None) – An additional callback executed at each simulation step.
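
For example, running until any robot collides (a sketch built on the has_collided flag and the robots list documented in this section):

>>> from pyenki import World, Thymio2
>>> world = World(width=40.0, height=40.0)
>>> robot = Thymio2()
>>> robot.left_wheel_target_speed = 10.0
>>> robot.right_wheel_target_speed = 10.0
>>> world.add_object(robot)
>>> world.run(steps=10000, time_step=0.05,
...           termination=lambda w: any(r.has_collided for r in w.robots))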

run_in_viewer(fps: SupportsFloat = 30, time_step: SupportsFloat = 0, factor: SupportsFloat = 1, helpers: bool = True, camera_reset: bool = False, camera_position: Vector = array([0., 0.]), camera_altitude: SupportsFloat = 0.0, camera_yaw: SupportsFloat = 0.0, camera_pitch: SupportsFloat = 0.0, camera_is_ortho: bool = False, walls_height: SupportsFloat = 10.0, duration: SupportsFloat = 0) None#

Run the simulation and display it in an interactive viewer window.

Parameters:
  • fps (float) – The framerate of the viewer in frames per second.

  • time_step (float) – The simulation time step in seconds.

  • factor (float) – The real-time factor. If larger than one, the simulation will run faster than real-time.

  • helpers (bool) – Whether to display the helper widgets.

  • camera_reset (bool) – whether to set the camera in the default pose.

  • camera_position (Vector) – the horizontal position of the camera.

  • camera_altitude (float) – the vertical position of the camera.

  • camera_yaw (float) – the camera rotation around the vertical axis.

  • camera_pitch (float) – the camera vertical rotation.

  • camera_is_ortho (bool) – whether the camera uses an orthographic projection.

  • walls_height (float) – the height of the world boundary in cm.

  • duration (float) – The duration of the simulation in simulated time. Negative values are interpreted as an infinite duration.

save_image(path: str, camera_reset: bool = False, camera_position: Vector = array([0., 0.]), camera_altitude: SupportsFloat = 0.0, camera_yaw: SupportsFloat = 0.0, camera_pitch: SupportsFloat = 0.0, camera_is_ortho: bool = False, walls_height: SupportsFloat = 10.0, width: SupportsFloat = 640, height: SupportsFloat = 360) None#

Render the world and save the resulting image to a file.

Parameters:
  • path (string) – The file path where to save the image.

  • camera_reset (bool) – whether to set the camera in the default pose.

  • camera_position (Vector) – the horizontal position of the camera.

  • camera_altitude (float) – the vertical position of the camera.

  • camera_yaw (float) – the camera rotation around the vertical axis.

  • camera_pitch (float) – the camera vertical rotation.

  • camera_is_ortho (bool) – whether the camera uses an orthographic projection.

  • walls_height (float) – the height of the world boundary in cm.

  • width (int) – the width of the image in pixels.

  • height (int) – the height of the image in pixels.

step(time_step: SupportsFloat, physics_oversampling: SupportsInt = 1) None#

Simulate a time step

Parameters:
  • time_step (float) – the update time step in seconds; should be below 1 (typically 0.02-0.1)

  • physics_oversampling (int) – the number of times the physics is updated per step; collisions usually require a more precise simulation than the sensor-motor loop frequency

WorldView#

class WorldView#

Bases: pybind11_object

A QOpenGLWidget that displays the world.

Parameters:
  • world (World | None) – The world to display.

  • fps (float) – The framerate of the viewer in frames per second.

  • update_world (bool) – Whether to trigger world updates before redrawing.

  • time_step (float) – The simulation time step in seconds.

  • factor (float) – The real-time factor. If larger than one, the simulation will run faster than real-time.

  • helpers (bool) – Whether to display the helper widgets.

  • camera_reset (bool) – whether to set the camera in the default pose.

  • camera_position (Vector) – the horizontal position of the camera.

  • camera_altitude (float) – the vertical position of the camera.

  • camera_yaw (float) – the camera rotation around the vertical axis.

  • camera_pitch (float) – the camera vertical rotation.

  • camera_is_ortho (bool) – whether the camera uses an orthographic projection.

  • walls_height (float) – the height of the world boundary in cm.

Example without PyQt:

>>> from pyenki import World, EPuck, WorldView, init_ui, run_ui
>>>
>>> world = World(radius=100)
>>> epuck = EPuck(camera=False)
>>> epuck.left_wheel_target_speed = 10.0
>>> epuck.set_led_ring(True)
>>> world.add_object(epuck)
>>> # setup Qt: needs to be called before creating the first view
>>> init_ui()
>>> viewer = WorldView(world)
>>> viewer.show()
>>> viewer.start_updating_world(0.1)
>>> # executes the Qt runloop for a while
>>> run_ui(duration=10)

Example with PyQt (composition of two views of the same world):

>>> from pyenki import World, EPuck, WorldView
>>> from PyQt6.QtCore import QCoreApplication, Qt
>>> from PyQt6.QtWidgets import QWidget, QApplication, QHBoxLayout
>>>
>>> # replaces init_ui(): needs to be called before creating any widget
>>> QCoreApplication.setAttribute(Qt.ApplicationAttribute.AA_ShareOpenGLContexts)
>>> app = QApplication([])
>>>
>>> world = World(radius=100)
>>> epuck = EPuck(camera=False)
>>> epuck.left_wheel_target_speed = 10.0
>>> epuck.set_led_ring(True)
>>> world.add_object(epuck)
>>> viewer_1 = WorldView(world, camera_position=(-20, -20), camera_altitude=20)
>>> viewer_1.point_camera(target_position=(0, 0), target_altitude=5)
>>> viewer_2 = WorldView(world, helpers=False, camera_altitude=30, camera_is_ortho=True)
>>> viewer_2.camera_is_ortho = True
>>> window = QWidget()
>>> hbox = QHBoxLayout(window)
>>> window.resize(960, 320)
>>> hbox.addWidget(viewer_1.widget)
>>> hbox.addWidget(viewer_2.widget)
>>> window.show()
>>> viewer_1.start_updating_world(0.1)
>>> app.exec()

world#

the world to display.

Type:

World | None

camera_position#

The camera horizontal position.

Type:

Vector

camera_altitude#

The camera vertical position.

Type:

float

camera_yaw#

the camera rotation around the vertical axis.

Type:

float

camera_pitch#

the camera vertical rotation.

Type:

float

camera_pose#

the camera pose as (position, altitude, yaw, pitch).

Type:

tuple[Vector, float, float, float]

camera_is_ortho#

whether the camera uses an orthographic projection.

Type:

bool

walls_height#

the height of the world boundary in cm (readonly).

Type:

float

tracking#

whether tracking is active.

Type:

bool

helpers#

whether to display the helper widgets.

Type:

bool

image#

the currently rendered image.

Type:

numpy.ndarray[tuple[int, int, int], numpy.dtype[numpy.uint8]]

widget#

This view, sip-wrapped so that it can be manipulated with PyQt.

Type:

QOpenGLWidget

__init__(world: World = None, fps: SupportsFloat = 30, update_world: bool = False, time_step: SupportsFloat = 0, factor: SupportsFloat = 1, helpers: bool = True, camera_reset: bool = False, camera_position: Vector = array([0., 0.]), camera_altitude: SupportsFloat = 0.0, camera_yaw: SupportsFloat = 0.0, camera_pitch: SupportsFloat = 0.0, camera_is_ortho: bool = False, walls_height: SupportsFloat = 10.0) None#
hide() None#

Hide the view

move_camera(target_position: Vector, target_altitude: SupportsFloat = 0, target_distance: SupportsFloat = 30, yaw: SupportsFloat | None = None, pitch: SupportsFloat | None = None) None#

Move the camera so that it points towards the target.

Parameters:
  • target_position (Vector) – The target horizontal position in cm.

  • target_altitude (float) – The target vertical position in cm.

  • target_distance (float) – The distance to the target.

  • yaw (float | None) – Optionally sets the camera yaw.

  • pitch (float | None) – Optionally sets the camera pitch.

point_camera(target_position: Vector, target_altitude: SupportsFloat = 0, position: Vector | None = None, altitude: SupportsFloat | None = None) None#

Rotate the camera so that it points towards the target.

Parameters:
  • target_position (Vector) – The target horizontal position in cm.

  • target_altitude (float) – The target vertical position in cm.

  • position (Vector | None) – Optionally sets the camera horizontal position in cm.

  • altitude (float | None) – Optionally sets the camera vertical position in cm.

reset_camera() None#

Reset the camera pose

save_image(arg0: str) None#

Save the image to a file.

Parameters:

path (string) – file path where to save the image.

show() None#

Show the view

start_updating_world(time_step: SupportsFloat = 0, factor: SupportsFloat = 1) None#

Start updating the world before redrawing the view.

Parameters:
  • time_step (float) – The simulation time step in seconds.

  • factor (float) – The real-time factor. If larger than one, the simulation will run faster than real-time.

stop_updating_world() None#

Stop updating the world before redrawing the view.

Helpers#

Remote buffer#

class EnkiRemoteFrameBuffer#

Bases: RemoteFrameBuffer

Renders a world in a Jupyter notebook by calling World.render().

world#

The world to display

Type:

World | None

camera_position#

The camera position

Type:

Vector

camera_altitude#

the vertical position of the camera.

Type:

float

camera_yaw#

the camera rotation around the vertical axis.

Type:

float

camera_pitch#

the camera vertical rotation.

Type:

float

camera_is_ortho#

whether the camera uses an orthographic projection.

Type:

bool

Example:

>>> from pyenki import World
>>> world = World()
>>> from buffer import EnkiRemoteFrameBuffer
>>> view = EnkiRemoteFrameBuffer(world=world)
>>> view.move_camera(target_position=(0, 0), target_altitude=5,
...                  camera_yaw=1, camera_pitch=-0.5, target_distance=40)
>>> view

__init__(world: World | None = None, camera_position: ndarray[tuple[Literal[2]], dtype[float64]] = array([0., 0.]), camera_altitude: float = 30, camera_yaw: float = 0, camera_pitch: float = -1.5707963267948966, camera_is_ortho: bool = False)#

Constructs a new instance.

Parameters:
  • world – The world

  • camera_position – The camera position

  • camera_altitude – The camera altitude

  • camera_yaw – The camera yaw

  • camera_pitch – The camera pitch

  • camera_is_ortho – Whether the camera uses an orthographic projection

get_frame()#

Return image array for the next frame.

Subclasses should overload this method. It is automatically called during a draw. The returned numpy array must be NxM (grayscale), NxMx3 (RGB) or NxMx4 (RGBA). May also return None to cancel the draw.

handle_event(event)#

Handle an incoming event.

Subclasses should overload this method. Events include widget resize, mouse/touch interaction, key events, and more. An event is a dict with at least the key event_type. See jupyter_rfb.events for details.

move_camera(target_position: ndarray[tuple[Literal[2]], dtype[float64]], target_altitude: float = 0, target_distance: float = 100, camera_yaw: float | None = None, camera_pitch: float | None = None) None#

Move the camera so that it points towards the target. Same interface as WorldView.move_camera().

Parameters:
  • target_position – The target horizontal position in cm.

  • target_altitude – The target vertical position in cm.

  • target_distance – The distance to the target.

  • camera_yaw – Optionally sets the camera yaw.

  • camera_pitch – Optionally sets the camera pitch.

point_camera(target_position: ndarray[tuple[Literal[2]], dtype[float64]], target_altitude: float = 0, position: ndarray[tuple[Literal[2]], dtype[float64]] | None = None, altitude: float | None = None) None#

Rotate the camera so that it points towards the target. Same interface as WorldView.point_camera().

Parameters:
  • target_position – The target horizontal position in cm.

  • target_altitude – The target vertical position in cm.

  • position – Optionally sets the camera position.

  • altitude – Optionally sets the camera altitude.

request_draw_sync() None#

Similar to jupyter_rfb.RemoteFrameBuffer.request_draw() but works outside of an event loop.

run(time_step: float, duration: float, factor: float = 1, synch: Collection[EnkiRemoteFrameBuffer] = ()) None#

Runs a simulation and updates the view for a while.

Parameters:
  • time_step – The time step of the simulation

  • duration – The duration of the simulation

  • factor – The real-time factor. If larger than one, the simulation will run faster than real-time.

  • synch – Which other buffers should be redrawn.

async run_async(time_step: float, duration: float, factor: float = 1, synch: Collection[EnkiRemoteFrameBuffer] = ()) None#

Runs a simulation and updates the view for a while.

Parameters:
  • time_step – The time step of the simulation

  • duration – The duration of the simulation

  • factor – The real-time factor. If larger than one, the simulation will run faster than real-time.

  • synch – Which other buffers should be redrawn.

tick(fps: float) None#

Updates the view and sleeps for a while.

async tick_async(fps: float) None#

Updates the view and sleeps for a while.

Video#

make_video(world: World, time_step: float, duration: float, factor: float = 1.0, **kwargs: Any)#

Generate a video by simulating the world for a while.

Parameters:
  • world – The world to be simulated

  • time_step – The time step of the simulation

  • duration – The duration of the simulation

  • factor – The real-time factor. If larger than one, it will speed up the video.

  • kwargs – The keyword arguments passed to World.render()

Returns:

The video clip.
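
A usage sketch (the import path of make_video is not shown in this document, and the moviepy-style clip API is an assumption):

>>> # assumes make_video has been imported and a world has been set up
>>> clip = make_video(world, time_step=0.1, duration=10.0,
...                   width=640, height=360)  # extra kwargs go to World.render()
>>> clip.write_videofile("demo.mp4")  # assuming a moviepy-compatible clip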