Hello, I’m Dhanush Kumar Antharam

I'm a Data Engineer, Data Analyst, and Software Engineer based in the United States, currently working at ChekOut.AI building AI-driven systems and backend infrastructure. With over three years of experience, I have worked on data automation, real-time analytics, and scalable system development at companies like Juspay Technologies and King Machine. My expertise spans Python, SQL, Golang, and cloud platforms such as GCP and AWS, with a strong focus on workflow automation, distributed systems, and AI-powered solutions. Passionate about data-driven innovation and system optimization, I enjoy building efficient, reliable, and intelligent systems that solve complex real-world challenges.

Learn More

About Me

Software Engineer with a strong focus on backend systems, data engineering, and AI-driven automation, based in the United States. I have over 3 years of experience in data automation, real-time analytics, ETL pipelines, and scalable system architectures across the fintech and manufacturing domains.

Before joining ChekOut.AI, I worked as an Engineering Intern at King Machine, where I developed an automated file management system that streamlined workflows, optimized storage usage, and integrated Git-based version control, improving operational efficiency and reducing storage overhead. Earlier, at Juspay Technologies, I built and deployed real-time data analysis systems that reduced processing time by 98% and lowered infrastructure costs by 80%, giving me strong exposure to high-scale, reliability-critical systems.

My technical experience spans Python, SQL, Golang, cloud platforms such as GCP and AWS, and tools like Docker, with a focus on building reliable, production-ready systems. I have also worked as a Machine Learning Engineer, developing AI models for real-time monitoring, business analytics, and anomaly detection; my research on abnormal activity detection using deep learning was published in Springer Nature LNNS and presented at WORLD S4 2021.

I enjoy solving complex engineering problems, optimizing systems, and building intelligent software that scales. I'm always eager to learn, experiment, and contribute to impactful products at the intersection of software engineering, data, and AI.

Projects

Multi-Processor Cache Coherence Simulation

In this project, a cache coherence protocol was developed for a four-processor system, each equipped with a 64KB Level-1 cache and a shared memory architecture. Both snooping and directory-based coherence mechanisms were implemented to ensure a 1-cycle latency for cache coherence operations. A key aspect of the system's design was the development of an optimized Bus Arbiter, which precisely managed access to the System MemBus, improving performance and communication efficiency. The project showcased expertise in C++ programming, algorithm design, and system-level architecture, demonstrating a deep understanding of memory consistency models and processor communication.
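The project itself was written in C++; as a rough illustration of how a snooping protocol keeps caches consistent, here is a minimal MESI-style sketch in Python (the state names, bus operations, and four-cache setup are illustrative assumptions, not the original implementation):

```python
# Minimal MESI-style snooping sketch: four caches share one bus and
# invalidate each other's copies on writes. States: M, E, S, I.

class Cache:
    def __init__(self):
        self.lines = {}  # address -> state ('M', 'E', 'S', 'I')

    def state(self, addr):
        return self.lines.get(addr, 'I')

class Bus:
    """Broadcasts read/write requests to every other cache (snooping)."""
    def __init__(self, caches):
        self.caches = caches

    def read(self, requester, addr):
        shared = False
        for c in self.caches:
            if c is requester:
                continue
            if c.state(addr) in ('M', 'E', 'S'):
                c.lines[addr] = 'S'       # other holders downgrade to Shared
                shared = True
        requester.lines[addr] = 'S' if shared else 'E'

    def write(self, requester, addr):
        for c in self.caches:
            if c is not requester:
                c.lines[addr] = 'I'       # snooping invalidation
        requester.lines[addr] = 'M'

caches = [Cache() for _ in range(4)]
bus = Bus(caches)
bus.read(caches[0], 0x40)    # P0 reads: Exclusive (no other copy)
bus.read(caches[1], 0x40)    # P1 reads: both copies become Shared
bus.write(caches[2], 0x40)   # P2 writes: P0/P1 invalidated, P2 Modified
```

A directory-based variant would replace the broadcast with per-line sharer lists, trading bus traffic for directory storage.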

Estimating Vehicle Speed on Roadways Using RNNs & Transformers

This project focused on real-time vehicle speed estimation using deep learning techniques, particularly RNNs, LSTMs, GRUs, and Transformers, to process video data. The LSTM model achieved 94.25% accuracy on the VS13 dataset and 78.62% on the I24_3D dataset, surpassing traditional approaches. The Transformer-based architecture handled long-term dependencies effectively, reaching 90% accuracy. Root Mean Square Error (RMSE) was reduced to 3.96 on VS13 and 10.99 on I24_3D, ensuring precise speed estimation. The system was designed to be scalable and non-intrusive, leveraging existing video surveillance infrastructure for real-world traffic monitoring applications.

Navigating Uncertainty: Adaptive Decision-Making

This research-driven project explored adaptive decision-making techniques using First-Visit Monte Carlo, Semi-Gradient Temporal Difference (TD), and Deep Q-Networks (DQN). The implemented models optimized resource management and goal navigation strategies, balancing reward maximization with efficient resource utilization. Policies were designed to subtly steer the agent's decision-making while masking the true underlying objectives. The project demonstrated a strong foundation in reinforcement learning, policy optimization, and real-time decision-making under uncertainty.
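As a concrete illustration of the first of these techniques, here is a minimal first-visit Monte Carlo policy-evaluation sketch in Python (the toy episodes and discount factor are illustrative, not the project's actual environment):

```python
# First-visit Monte Carlo policy evaluation on toy episodes.
# Each episode is a list of (state, reward) pairs; V(s) is estimated
# as the average discounted return following the FIRST visit to s.
from collections import defaultdict

def first_visit_mc(episodes, gamma=0.9):
    returns = defaultdict(list)
    for episode in episodes:
        # Record the index of the first visit to each state.
        firsts = {}
        for i, (s, _) in enumerate(episode):
            if s not in firsts:
                firsts[s] = i
        # Walk backward, accumulating the discounted return G.
        G = 0.0
        for i in range(len(episode) - 1, -1, -1):
            s, r = episode[i]
            G = gamma * G + r
            if firsts[s] == i:            # first visit only
                returns[s].append(G)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

episodes = [
    [('A', 0), ('B', 0), ('goal', 1)],
    [('A', 0), ('goal', 1)],
]
V = first_visit_mc(episodes)   # e.g. V['A'] averages 0.81 and 0.9
```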

Abnormal Activity Detection Using Deep Learning

A deep learning-based anomaly detection system was developed using Spatio-Temporal Autoencoders and C3D feature extraction techniques to identify unusual activities in video sequences. The model achieved a notable 78% accuracy in detecting anomalies, improving the efficiency of feature extraction for event classification. Performance was evaluated through precision, recall, and F1-score, ensuring robust anomaly detection capabilities. The findings were presented at the World S4 2021 Conference, and a research paper detailing the methodology and results was published, highlighting its implications for real-world surveillance applications.

Age Group Prediction from Voice Data

This project involved age classification using voice data by leveraging both classical machine learning and deep learning techniques. A feature extraction process was implemented, yielding 23 features for traditional ML models, such as Logistic Regression, Support Vector Classification (SVC), and k-Nearest Neighbors (KNN), while neural networks (FNN, CNN) utilized 191 extracted features. The comparative analysis of classical and deep learning models was conducted using accuracy, precision, and recall metrics, providing valuable insights into the effectiveness of different modeling approaches for age prediction based on vocal characteristics.

Self-Driving Car Using Computer Vision and Raspberry Pi

A computer vision-based autonomous navigation system was designed for real-time lane detection and curve tracking. The system employed edge detection, perspective transformation, and polynomial fitting techniques to enhance lane-following precision. A Raspberry Pi with an integrated camera was used for real-time road navigation, enabling seamless integration into embedded systems. The model was optimized for adaptability to various road conditions, ensuring robust autonomous driving capabilities with improved accuracy and responsiveness.
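The polynomial-fitting step can be illustrated with a small pure-Python quadratic least-squares fit (the lane-pixel coordinates below are synthetic; a real pipeline would take them from edge detection on a perspective-transformed, bird's-eye-view frame):

```python
# Quadratic least-squares fit, the "polynomial fitting" step of a lane
# pipeline: given lane-pixel coordinates, fit x = a*y^2 + b*y + c.
# Solves the 3x3 normal equations with Gaussian elimination.

def fit_quadratic(ys, xs):
    n = len(ys)
    s = [sum(y**k for y in ys) for k in range(5)]          # sums of y^0..y^4
    t = [sum(x * y**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], n]]
    b = [t[2], t[1], t[0]]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # (a, b, c)

# Synthetic points lying exactly on x = 0.5*y^2 + 2*y + 10
ys = [0, 1, 2, 3, 4]
xs = [0.5 * y**2 + 2 * y + 10 for y in ys]
a, b_, c = fit_quadratic(ys, xs)
```

In practice this is what a call like NumPy's `polyfit` does under the hood; the curvature coefficient `a` then feeds the steering decision.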

Job Aggregator Tool Using Selenium

A web-based job aggregation tool was developed to compile job listings from platforms such as LinkedIn, Naukri, and Indeed based on user-defined criteria. The tool utilized web scraping techniques to extract relevant job details, including title, company, location, and application links. A RESTful API was implemented using Flask to manage user queries efficiently and deliver formatted job results. Error-handling mechanisms were incorporated to ensure reliability and robust performance. The application was successfully deployed on Heroku, providing scalability and high availability. Additionally, a user-friendly interface was designed for seamless navigation and effective job search filtering.

Menu Scraper Using Selenium

This project involved developing a web scraping tool using Selenium to extract restaurant menus from platforms such as Swiggy and Zomato. The scraped data was cleaned and structured for easy analysis, with location-based filtering options integrated to assist users in identifying restaurants that met their specific preferences. The scraper was optimized to adapt to website layout changes, ensuring consistent and reliable performance. Additionally, data export options (e.g., CSV) were implemented for external analysis and business insights.

Spam Detection System Using Sentiment Analysis

As part of an academic internship project, a spam detection system was developed using sentiment analysis techniques to identify negative reviews within a dataset. The system provided business analysts with valuable insights to enhance customer engagement and revenue optimization. The project involved natural language processing (NLP) techniques, text classification, and sentiment analysis, showcasing expertise in extracting meaningful patterns from textual data.
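As a sketch of the text-classification component, here is a tiny bag-of-words Naive Bayes classifier in Python (the training reviews and labels are illustrative stand-ins; the original system's exact NLP pipeline is not reproduced here):

```python
# Tiny bag-of-words Naive Bayes, illustrating text classification:
# reviews are tokenized and scored per class with Laplace smoothing.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # label -> word counts
        self.priors = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        best, best_score = None, float('-inf')
        for label, prior in self.priors.items():
            total = sum(self.counts[label].values())
            score = math.log(prior / sum(self.priors.values()))
            for w in words:
                # Laplace smoothing keeps unseen words from zeroing a class.
                p = (self.counts[label][w] + 1) / (total + len(self.vocab))
                score += math.log(p)
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayes().fit(
    ["great product fast delivery", "loved it works great",
     "terrible waste of money", "awful fake spam offer"],
    ["positive", "positive", "negative", "negative"],
)
```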

Line-Following Robot Using Arduino and IR Sensors

A line-following robotic system was designed and implemented using an Arduino Uno microcontroller and infrared (IR) sensors. The robot autonomously navigated predefined paths by detecting and following line patterns, making it applicable for industrial automation and railway systems. The project demonstrated proficiency in embedded systems, sensor integration, and real-time control algorithms, contributing to the development of efficient automated navigation solutions.

Publications

Abnormal Activity Detection Using Deep Learning

This research, published in the Springer Nature LNNS series (5th International Conference, World S4 2021), explores Spatio-Temporal Autoencoders and C3D feature extraction techniques for anomaly detection in video sequences, achieving 78% accuracy. The system automates surveillance by identifying incidents such as accidents, burglaries, and explosions. A 3D CNN-based autoencoder reconstructed frames with minimal loss (8%) and 71% accuracy, while C3D feature extraction further improved detection performance. Designed for real-time monitoring, this model integrates with CCTV cameras in banks, roads, and crowded areas to enhance security and reduce manual workload.

View Publication

Skills

My Technical Proficiencies Across Categories

Programming Languages & Databases

  • R
  • Python
  • Golang
  • JavaScript
  • HTML
  • PostgreSQL
  • BigQuery
  • ClickHouse

Agentic AI

  • LLM Integration (OpenAI, Claude, Gemini)
  • AI Agent Design & Orchestration
  • Prompt Engineering
  • Tool Calling & Workflow Automation
  • RAG Systems
  • Vector Databases (Pinecone, Chroma)
  • Latency & Cost Optimization
  • Production AI Evaluation & Monitoring

Core Competencies

  • Data Analysis
  • Machine Learning (scikit-learn, etc.)
  • Deep Learning (PyTorch, TensorFlow)
  • Web Scraping (Selenium)
  • SQL
  • Containerization (Docker)
  • CI/CD Pipelines (Jenkins)
  • Agile SDLC

Platforms & Integrations

  • Langflow
  • n8n
  • Creatio
  • Klaviyo
  • Shopify
  • Meta APIs
  • Stripe

Tools & Other Skills

  • REST API Development
  • Power BI
  • Google Cloud Platform
  • Tableau
  • Linux
  • Windows
  • Git
  • Bitbucket

Experience & Education

My Professional & Academic Journey

Software Engineer

ChekOut.AI

Jul 2025 – Present

Responsibilities

  • Designed a one-click agent deployment system where users upload custom knowledge bases and prompts; the platform automatically provisions a dedicated AI agent, indexes content, and serves real-time, context-aware responses.
  • Built an end-to-end agentic AI platform with automated ingestion, chunking, embeddings, vector indexing, and RAG-based retrieval to generate tenant-specific agents.
  • Designed an Agent Deployment Engine supporting document ingestion, embedding generation, vector indexing, and automated Cloud Run microservice spin-up per agent.
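The ingestion-to-retrieval flow behind such a RAG agent can be sketched in miniature (a real deployment uses learned embeddings and a managed vector index; the bag-of-words "embedding" and sample knowledge base below are stand-in assumptions):

```python
# Toy sketch of the ingest -> chunk -> embed -> retrieve flow behind a
# RAG agent. A bag-of-words vector and cosine similarity stand in for a
# learned embedding model and a vector database.
import math
from collections import Counter

def chunk(text, size=8):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [' '.join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    return Counter(text.lower().split())   # stand-in for an embedding model

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(index, query, k=1):
    """Rank chunks by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(index, key=lambda c: cosine(embed(c), q), reverse=True)
    return ranked[:k]

kb = ("Refunds are issued within 5 business days. "
      "Shipping is free on orders over 50 dollars. "
      "Support is available by email around the clock.")
index = chunk(kb)
top = retrieve(index, "how long do refunds take")
```

The retrieved chunks would then be injected into the agent's prompt so the model answers from the tenant's own knowledge base.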

Engineering Intern

King Machine

Jan 2025 – Dec 2025

Responsibilities

  • Developed an automated file management system at King Machine using Python to streamline organizational workflows and optimize storage usage, cutting 1TB of excess data.
  • Built automated notification systems, minimizing manual effort for employees and improving overall efficiency.
  • Integrated the workflow with Git, using a bare repository for seamless version control.
  • Enhanced both operational productivity and data management, focusing on scalable solutions for workflow automation and storage optimization.

Engineering Intern

Oct 2023 – Jan 2025

Responsibilities

  • Developed a commissioning application using R Shiny for the front end and MongoDB for the backend, deploying it on shinyapps.io to enhance operational efficiency.
  • Followed the Agile SDLC methodology, ensuring iterative progress through regular feedback and continuous improvements.
  • Implemented interactive features and real-time data management to streamline commissioning tasks and enhance user experience.
  • Analyzed and optimized the utilization of facilities equipment such as boilers, coolers, and AHUs, contributing to improved resource management.
  • Leveraged HTML, CSS, JavaScript, and R Shiny to develop a cost-effective solution, reducing operational costs by 50%.

Senior Data Analyst

Juspay Technologies

Jan 2022 – Jun 2023

Responsibilities

  • Developed an automated real-time data analysis system using R, BigQuery (GCP), and ClickHouse, reducing analysis time and incorporating Intelligent Dynamic Recommendations.
  • Implemented customized alerts and reports sent via Slack and email to enhance monitoring and decision-making.
  • Designed and implemented ETL pipelines for seamless daily updates to BigQuery.
  • Developed a Golang-based server using the Echo framework to automate R and Python processes efficiently.
  • Containerized the solution with Docker and automated deployments through Jenkins, GCP, and AWS ECR for scalability and streamlined releases.
  • Integrated a Process Tracker for scheduled tasks, automated email reporting, and backend issue resolution alerts.
  • Optimized infrastructure efficiency, achieving an 80% reduction in infrastructure costs.

Machine Learning Engineer

Jul 2021 – Dec 2021

Responsibilities

  • Developed a Business Genie model using Computer Vision and Deep Learning, achieving 81% accuracy in real-time employee and customer movement tracking, including person tracking and automated entry-exit record updates in the store's database.
  • Deployed the model as part of a Django application on AWS Lambda, reducing processing time by 40% and ensuring scalable, efficient cloud-based operations.
  • Developed a customer hotspot identification model, enhancing the accuracy of engagement zone detection and optimizing store layout based on foot traffic analysis.
  • Designed a Deep Learning model for skin cancer detection with 83% accuracy, contributing to AI-driven healthcare solutions for early diagnosis and improved patient care.

Master’s in Computer Engineering

Aug 2023 – May 2025

Bachelor’s in Electronics & Communication Engineering

JNTU Hyderabad

Jul 2017 – Aug 2021

Contact

Let’s work together! Fill out the form below with your project idea.