Daniel Eneh

Robotics/Computer Vision/Software Engineer

Welcome to my Portfolio! I am a passionate Robotics Engineer and Computer Vision Specialist with a strong foundation in robotics perception, localization, and navigation. My journey in the world of robotics has been driven by a deep fascination with the intersection of hardware and software, and a commitment to solving complex challenges in this exciting field.

BlueROV2 Underwater Inspection & Digital Twin

A photo-realistic digital twin of an offshore wind farm built with BlueROV2, Unreal Engine and a ROS plugin to inspect underwater anodes. The system performs Visual SLAM and real-time 3D reconstruction using Nvidia nvblox, semantic segmentation of anodes and valves, stereo-to-point-cloud conversion, and depth estimation deployed on edge with TensorRT. An end-to-end ROS-gRPC perception pipeline streams segmentation classes, object IDs and depth data to an LLM knowledge manager to enhance human-in-the-loop ROV autonomy.

BlueROV2 Underwater Inspection

Demo Videos

Self-Driving Car — 3D Object Detection & Sensor Fusion

Trained SSD MobileNet v2 and SSD ResNet50 v1 on the Waymo Dataset for object detection, selecting the best model for deployment. Also performed end-to-end 3D object detection from LiDAR point clouds using FPN ResNet and YOLO (Darknet), combined with Kalman filter sensor fusion on the Waymo Perception Dataset. (Udacity Self-Driving Car Nanodegree)

Generative AI for Computer Vision

Two projects combined in one repo: (1) An application that combines the Segment Anything Model (SAM) with Stable Diffusion to automatically generate a segmentation mask from a text prompt for any image. (2) A StyleGAN used to generate synthetic training datasets. (Udacity Generative AI Course)

Construction Robot Perception Stack — RobuildX

Technical lead for the autonomous mobile robot team at RobuildX (stealth start-up). Designing the full perception stack to improve the autonomy of construction robots on active construction sites, including 3D scene understanding, obstacle avoidance, and real-time decision making.

PhysicsFlow — AI-Native Reservoir Simulation

Physics-Informed Neural Operator (FNO/PINO) surrogate platform for reservoir simulation and history matching. Features adaptive Ensemble Kalman Inversion (αREKI), a Hybrid RAG Knowledge Assistant, Reservoir Knowledge Graph, and a WPF desktop application for real-time well performance analysis and production forecasting.

UAV for Detection of Pipeline Faults

An intelligent drone equipped with LiDAR, RGB and depth cameras capable of 3D environment mapping, visual SLAM, semantic mapping, obstacle avoidance, and pipeline leak detection using YOLOv8 and Facebook's Detection Transformer. Motion and path planning algorithms are used to autonomously track the pipeline.

Extended Kalman Filter for Robot Localization and Navigation

An Extended Kalman Filter is employed to fuse visual odometry and GPS readings to achieve precise robot localization, using the Oxford RobotCar Dataset. GPS alone is not accurate enough for outdoor robots because of local obstacles and limited precision, while odometry drifts over time. Fusing the two with the EKF reduces odometry drift and smooths noisy GPS fixes.
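The prediction/correction loop behind this fusion can be sketched in a few lines. This is a minimal illustration, not the project's code: it assumes a 2D position state, odometry increments as the prediction input, and GPS observing position directly (with a linear measurement model the EKF update reduces to the standard Kalman update). All noise values are placeholder assumptions.

```python
import numpy as np

class GpsOdomEKF:
    """Minimal 2D position filter: odometry drives the prediction step,
    GPS fixes correct it. Noise covariances here are illustrative."""

    def __init__(self):
        self.x = np.zeros(2)          # state: [x, y] position estimate
        self.P = np.eye(2) * 1.0      # state covariance
        self.Q = np.eye(2) * 0.05     # odometry (process) noise
        self.R = np.eye(2) * 4.0      # GPS (measurement) noise

    def predict(self, odom_delta):
        # Dead-reckon with the odometry increment; uncertainty grows.
        self.x = self.x + odom_delta
        self.P = self.P + self.Q

    def update(self, gps_xy):
        # GPS observes position directly (H = I), so the update is linear.
        S = self.P + self.R                    # innovation covariance
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (gps_xy - self.x)
        self.P = (np.eye(2) - K) @ self.P
```

Because GPS noise is set much larger than odometry noise here, each update nudges the dead-reckoned estimate only partway toward the GPS fix, which is what suppresses GPS jitter while still bounding odometry drift.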

Wireless Event Camera

Event cameras capture changes in brightness asynchronously and in real time, offering low power consumption and high dynamic range compared to conventional frame-based cameras. In this project, Prophesee EVK3 and EVK4 event cameras wirelessly transmit event frames through an image pipeline to an Nvidia Jetson remote workstation, using a Raspberry Pi 4 as the companion computer. A 3D mount for the camera was designed in AutoCAD.
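Shipping frames from the Pi to the Jetson needs some wire format; the sketch below shows one simple possibility, a length-prefixed header followed by raw pixels. The layout (height, width, uint8 pixels) is an assumption for illustration, not the format the project actually uses.

```python
import struct
import numpy as np

def pack_event_frame(frame: np.ndarray) -> bytes:
    """Serialize a 2D uint8 event frame: network-order height and width,
    then the raw pixel bytes."""
    h, w = frame.shape
    return struct.pack("!II", h, w) + frame.astype(np.uint8).tobytes()

def unpack_event_frame(payload: bytes) -> np.ndarray:
    """Inverse of pack_event_frame: read the 8-byte header, reshape the rest."""
    h, w = struct.unpack("!II", payload[:8])
    return np.frombuffer(payload[8:], dtype=np.uint8).reshape(h, w)
```

Either side of a TCP socket can then send and receive frames by writing `pack_event_frame(frame)` and reading back the header to know how many pixel bytes to expect.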

Depth Estimation Using LiDAR Measurements

In this project, a LiDAR histogram is used to build a depth map of the environment with a straightforward matched filter algorithm. The depth map aids in visualizing the distances of objects within a scene.