Process Kexxu Eye output data with Python
With Kexxu Eye you can upload your recordings to the cloud editor. The editor comes with a set of pre-made tools, with everything you need to run an eye tracking experiment.
If you don’t want to use the cloud tools, you can also download the video file and the data file with the Kexxu Devices app, and process the data yourself. Here we show you the basics of that, using Python.
Data format
A Kexxu Eye recording consists of a video file and a data file. This is what the data file looks like:
devices/openeye-123-E35ZnkVSPg/eyetracking 1675432733440 {"eye_left_pos_x":0.0,"eye_left_pos_y":0.0,"eye_right_pos_x":0.0,"eye_right_pos_y":0.0,"eye_top_pos_x":0.0,"eye_top_pos_y":0.0,"pupil_pos_x":0.0,"pupil_pos_y":0.2000,"pupil_rel_pos_x":0.0498,"pupil_rel_pos_y":-0.62388,"timestamp_ms":"1675432733434"}
devices/openeye-123-E35ZnkVSPg/eyetracking 1675432733463 {"eye_left_pos_x":0.0,"eye_left_pos_y":0.0,"eye_right_pos_x":0.0,"eye_right_pos_y":0.0,"eye_top_pos_x":0.0,"eye_top_pos_y":0.0,"pupil_pos_x":0.0,"pupil_pos_y":0.20000,"pupil_rel_pos_x":0.04984,"pupil_rel_pos_y":-0.64640,"timestamp_ms":"1675432733461"}
devices/openeye-123-E35ZnkVSPg/eyetracking 1675432733536 {"eye_left_pos_x":0.0,"eye_left_pos_y":0.0,"eye_right_pos_x":0.0,"eye_right_pos_y":0.0,"eye_top_pos_x":0.0,"eye_top_pos_y":0.0,"pupil_pos_x":0.0,"pupil_pos_y":0.20000,"pupil_rel_pos_x":0.0498,"pupil_rel_pos_y":-0.6464,"timestamp_ms":"1675432733535"}
devices/openeye-123-E35ZnkVSPg/eyetracking 1675432733673 {"eye_left_pos_x":0.0,"eye_left_pos_y":0.0,"eye_right_pos_x":0.0,"eye_right_pos_y":0.0,"eye_top_pos_x":0.0,"eye_top_pos_y":0.0,"pupil_pos_x":0.0,"pupil_pos_y":0.18888,"pupil_rel_pos_x":0.06225,"pupil_rel_pos_y":-0.66899,"timestamp_ms":"1675432733671"}
The data file is organized as an event stream, with one event per line. The first entry is the event type. Here it is always eyetracking, but it can also be gps, for example, or a button press in the app signaling the start of a scenario.
The second entry is the Unix timestamp in milliseconds, marking the time of the event.
The third entry is the payload of the event, which is JSON. The JSON fields differ per event type.
For every frame in the video there is exactly one eyetracking event, though the data file does not start and stop at exactly the same time as the video recording; it can be off by 2 or 3 frames.
Download example data file and video.
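To make the line format concrete, here is a minimal parsing sketch. The helper name parse_event_line is ours, not part of any Kexxu library; it splits one line into the event type, the timestamp, and the decoded JSON payload:

import json

def parse_event_line(line: str):
    # Topic, timestamp, and JSON payload are separated by spaces; the payload
    # itself can contain spaces, so split at most twice.
    topic, timestamp, payload = line.split(" ", 2)
    event_type = topic.rsplit("/", 1)[-1]  # e.g. "eyetracking" or "gps"
    return event_type, int(timestamp), json.loads(payload)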
To convert the eye locations to screen coordinates, you can use the following Python function:
import json
from typing import List, Tuple

def read_kexxu_eye_data(path: str) -> List[Tuple[int, int]]:
    """Read a Kexxu Eye data file and return gaze positions in video pixel coordinates."""
    locs = []
    with open(path, "r") as f:
        for line in f:
            parts = line.split(" ")
            if parts[0].endswith("/eyetracking"):
                # The JSON payload can contain spaces, so rejoin the remainder.
                js = json.loads(" ".join(parts[2:]))
                rx = js["pupil_rel_pos_x"]
                ry = js["pupil_rel_pos_y"]
                # Map the relative position (-1 to 1) to 1280x720 video pixels;
                # note that x is mirrored relative to pupil_rel_pos_x.
                x = 640 - int(rx * 640.0)
                y = 360 + int(ry * 360.0)
                locs.append((x, y))
    return locs
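As a quick way to check the result, you could overlay the gaze on the scene video with OpenCV. This is just a sketch: the file names are placeholders for your own recording, and, as noted above, the events and frames can be offset by 2 or 3 frames, so indexing by frame number is only approximately aligned.

import cv2  # pip install opencv-python

locs = read_kexxu_eye_data("kexxu_data.txt")  # placeholder file name
cap = cv2.VideoCapture("kexxu_video.mp4")     # placeholder file name
i = 0
while True:
    ok, frame = cap.read()
    if not ok or i >= len(locs):
        break
    # Draw the gaze location for this frame as a red circle.
    cv2.circle(frame, locs[i], 10, (0, 0, 255), 2)
    cv2.imshow("gaze", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
    i += 1
cap.release()
cv2.destroyAllWindows()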
pupil_rel_pos_x and pupil_rel_pos_y give the position of the gaze relative to the center of the video, ranging from -1 to 1. The video is 1280×720 pixels for the Kexxu Eye v1. Note that if you import a video from a Tobii G2 or G3 into the editor, the same function will work, but you will have to plug in a different resolution, because those videos use a different format.
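If you need to support other resolutions, the same mapping is easy to parameterize. A minimal sketch (the name to_pixels is ours, and the defaults assume the Kexxu Eye v1 video):

def to_pixels(rx: float, ry: float, width: int = 1280, height: int = 720):
    # Map the relative gaze position (-1 to 1, centered) to pixel coordinates.
    x = width // 2 - int(rx * width / 2.0)
    y = height // 2 + int(ry * height / 2.0)
    return x, y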
If you have any questions about using this Python function, please don’t hesitate to send me an email!