CICPS 2026 HACKATHON in collaboration with JUSENSE
Overview
CICPS 2026 Hackathon in collaboration with JUSENSE is a theme-driven competition under the prestigious International Conference on Computational Intelligence and Cyber Physical Systems (CICPS 2026). The hackathon aims to leverage AI-powered Cyber-Physical Systems (CPS) to address real-world problems across three critical sectors — Transport, Health, and Agriculture.
Participants will propose and build innovative CPS solutions using AI, edge computing, and intelligent sensing for practical deployment.
Problem Statement
1. Transport: Road Surface Anomaly Detection using AI-Driven CPS
Participants are tasked with building a low-cost, camera-based CPS solution to detect and map road anomalies like potholes and cracks using deep learning and geospatial visualization.
Proposed Titles (select one of the following):
- Intelligent Monitoring and Map-Visualization for Road Surface Anomalies/Damage using AI-Driven CPS in Low-Cost Setup
- Intelligent Monitoring and Map-Visualization for Road Surface Anomalies/Damage in Low-Cost Setup
- Intelligent Road Infrastructure Monitoring via Low-Cost Cyber-Physical Systems
- Smart Anomaly Mapping in Roads using Edge AI and CPS
- Detect and Map-Visualize Road Damage Using Low-Cost CPS
Background:
Maintaining safe and efficient road infrastructure is essential for urban sustainability, mobility, and public safety. However, road networks are subject to degradation over time, manifesting as anomalies/damage such as potholes, cracks, and surface irregularities. Manual inspection methods are resource-intensive, slow, and non-scalable, necessitating the development of intelligent, automated, and geo-aware systems for road condition monitoring.
Challenge:
Design and develop an AI-powered Cyber-Physical System (CPS) that can detect, classify, and geo-tag surface-level road anomalies/damages in urban environments. The system may use computational intelligence techniques with physical sensing modalities (e.g., camera images/videos, mobile sensor data, drone footage) to perform automated road surface anomalies/damage monitoring. Participants are encouraged to innovate in road surface anomaly detection, dataset creation or usage, model training, geospatial analysis, and visual dashboards for city-level anomaly mapping and reporting.
Scope for Innovation (Flexible Dimensions):
- Data Source: Smartphone images/videos, smartphone sensor data, drone images/videos, dashcam data, roadside camera data, or any open road-condition datasets
- Sensing Modality: RGB images, video streams, or any GPS-tagged media
- Modeling Techniques: Deep learning (object detection), classical ML, or hybrid approaches
- Deployment: Cloud-based, edge AI, or federated setups
- Visualization: Interactive dashboards / web visualization
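As an illustration of the visualization dimension, the snippet below is a minimal sketch in Python using the folium library; the coordinates and anomaly labels are invented placeholders standing in for real geo-tagged model detections:

```python
# A minimal sketch of map-visualization for geo-tagged detections.
# The coordinates and labels below are made up for illustration.
import folium

# Hypothetical geo-tagged detections: (lat, lon, anomaly type)
detections = [
    (22.5726, 88.3639, "pothole"),
    (22.5741, 88.3652, "crack"),
]

# Center the map on the first detection
m = folium.Map(location=detections[0][:2], zoom_start=16)

for lat, lon, label in detections:
    folium.Marker(
        location=(lat, lon),
        popup=label,
        icon=folium.Icon(color="red" if label == "pothole" else "orange"),
    ).add_to(m)

m.save("road_anomaly_map.html")  # open in a browser to inspect
```

The saved HTML file can be embedded directly in the SubTask 4 web dashboard.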
Sub Tasks:
- SubTask 1 (Annotated Dataset): Dataset (self-created or existing) with annotations.
- SubTask 2 (AI Model Development): AI model building for detecting road anomalies/damages in an image. This is an object detection problem: the model should detect and localize potholes, cracks, and uneven surfaces in an image (a minimal training sketch follows this list).
- SubTask 3 (Deployment – Cloud): Deploy the AI model in the cloud (or locally).
- SubTask 4 (Web Application): Create a simple web dashboard. Upon uploading geo-tagged images/videos/sensor data, the model will identify road anomalies/damages and visualize them in a web-based map.
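SubTask 2 is a standard object detection problem; the following is a minimal sketch of one possible approach, fine-tuning a pretrained YOLO checkpoint with the ultralytics package. The dataset file "potholes.yaml" and the image paths are placeholders for your own annotated data from SubTask 1:

```python
# A minimal detection sketch using the ultralytics package.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint

# Fine-tune on the annotated road-anomaly dataset (SubTask 1);
# "potholes.yaml" is an assumed placeholder listing class names
# and train/val image paths
model.train(data="potholes.yaml", epochs=50, imgsz=640)

# Run inference on a test image; each result carries bounding
# boxes, classes, and confidences for the dashboard (SubTask 4)
results = model.predict("sample_road.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)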
Expected Deliverables:
- A trained AI model capable of detecting road anomalies/damages from images
- Annotated dataset (open-source or field-collected) with appropriate labeling
- A deployed version of the model (cloud/mobile/edge) ready for real-time use
- A demo application (web or mobile) showing anomaly detections and map-based results
- Documentation (max 4 pages) or presentation (max 15 slides) covering:
1. Annotated dataset (self-created or existing)
2. Preprocessing, augmentation techniques
3. Model architecture, evaluation metrics (mAP@[IoU=0.5], Precision, Recall, F1-score, etc.) and performance
4. Screenshots of the deployed app or dashboard
Example Dataset:
Participants may use existing open datasets or generate their own.
2. Agriculture: Crop Disease Detection via AI and Vision-based CPS
Build a system capable of detecting crop diseases using images captured via drones, smartphones, or IoT sensors, and of visualizing geo-tagged health maps.
Proposed Titles (select one of the following):
- AI-Powered CPS for Crop Disease Detection and Visual Monitoring
- Smart Plant Health Monitoring Using Vision-Driven Cyber-Physical Systems
- Crop Disease Identification and Mapping via AI-Enabled CPS in Low-Cost Setup
- Intelligent Agriculture: Detect and Visualize Crop Diseases Using AI and CPS
- Edge-AI and Cyber-Physical Systems for Real-Time Crop Disease Detection
Background:
Crop diseases threaten agricultural productivity and global food security, causing significant economic losses. Early detection is vital for effective treatment and prevention. However, traditional practices relying on manual field inspections by experts are time-consuming, inconsistent, and unsuitable for large-scale farms. With the advancement of Cyber-Physical Systems (CPS) — integrating physical sensing (e.g., smartphone cameras, drones, IoT sensors) with intelligent AI models — we can develop low-cost, scalable, and automated solutions for crop disease detection and spatial monitoring.
Challenge:
Develop an AI-powered Cyber-Physical System that can detect and classify crop diseases from images captured in real farming environments. The system should be capable of operating with various physical inputs (leaf images from smartphones, drone images from plantations, or field cameras), process the data using deep learning or classical ML, and return results with location-specific insights. Participants are expected to explore vision-based disease detection, integrate it with physical data capture mechanisms, and build intuitive web/mobile interfaces for diagnosis, tracking, and decision-making support.
Scope for Innovation (Flexible Dimensions):
- Data Source: Public datasets or images/videos captured via smartphones, drones, or IoT devices in agricultural fields
- Sensing Modality: RGB images of leaves or crops, optionally geo-tagged using GPS-enabled devices
- Modeling Techniques: Deep learning (CNNs, Vision Transformers), classical ML models, ensemble techniques
- Deployment Platforms: Cloud-based APIs, mobile apps, or edge-AI enabled CPS (e.g., Jetson Nano, Raspberry Pi); a minimal edge-conversion sketch follows this list
- Visualization: Real-time mobile/web dashboard with disease prediction and optional geo-tagged health maps
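For the edge-AI option above, one possible route is converting a trained Keras classifier to TensorFlow Lite for a Raspberry Pi class device. The sketch below assumes a saved model file named "crop_disease_model.keras", which is a placeholder for whatever classifier you train in SubTask 2:

```python
# A minimal edge-deployment sketch: Keras model -> TensorFlow Lite.
import tensorflow as tf

# Placeholder path; substitute your own trained classifier
crop_model = tf.keras.models.load_model("crop_disease_model.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(crop_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

# The .tflite file can then be copied to the edge device for inference
with open("crop_disease_model.tflite", "wb") as f:
    f.write(tflite_model)
```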
Sub Tasks:
- SubTask 1 (Annotated Dataset): Use or create a dataset with labeled disease categories. Collect images via physical devices (smartphone, drone) and annotate them with appropriate labels (e.g., crop type, disease name, severity).
- SubTask 2 (AI Model Development): Train an AI model to detect and classify crop diseases from plant images. Evaluate using Accuracy, Precision, Recall, F1-score, and Confusion Matrix (a minimal training sketch follows this list).
- SubTask 3 (Deployment – Cloud): Integrate the model with a CPS-based deployment setup — cloud service, mobile app, or edge device — to support real-time, on-field inference.
- SubTask 4 (Web Application): Build a simple user interface (web/mobile) where users can upload or capture images. The app should display the disease type, severity, and (optionally) location of affected crops using a map-based visualization.
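As one possible approach to SubTask 2, the sketch below uses transfer learning with torchvision. The directory "data/train" is an assumed placeholder following the class-per-folder layout that ImageFolder expects, with one folder per disease category:

```python
# A minimal classification sketch: transfer learning with torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# "data/train" is a placeholder for your annotated dataset (SubTask 1)
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the final layer of a pretrained ResNet with one sized to
# the number of disease classes found in the dataset
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a short run; tune epochs for real training
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```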
Expected Deliverables:
- A trained AI model capable of identifying crop diseases from field images
- Annotated dataset (existing or self-collected using CPS-based devices)
- A deployed solution (cloud/mobile/edge) for live predictions
- A web/mobile app that visualizes disease classification and optional geo-mapping
- Documentation (max 4 pages) or presentation (max 15 slides) covering:
1. Annotated dataset (self-created or existing)
2. Preprocessing, augmentation techniques
3. Model architecture, evaluation metrics (Accuracy, Precision, Recall, F1-score, Confusion Matrix, etc.) and performance
4. Screenshots of the deployed app or dashboard
Example Dataset:
Participants may use existing open datasets or generate their own.
3. Health: AI-Powered CPS for Skin Disease Detection
Design a smart health-monitoring solution that applies AI models to skin images for early detection of dermatological conditions via mobile/web/edge-based CPS.
Proposed Titles (select one of the following):
- AI-Powered CPS for Skin Disease Detection
- Vision-Driven Cyber-Physical System for Automated Skin Disease Identification
- Smart Health Monitoring: Skin Condition Detection Using AI and Edge-Enabled CPS
- Real-Time Skin Disease Classification and Mapping via Low-Cost CPS
- AI and CPS Integration for Early Detection of Dermatological Conditions
Background:
Skin diseases are among the most common health issues globally, ranging from mild infections to serious conditions such as melanoma. Early diagnosis is critical, but access to dermatologists is limited in many rural or under-resourced areas. Traditional diagnosis often requires visual inspection and expertise, which is not scalable or uniformly available. With the advent of Cyber-Physical Systems (CPS) - which integrate physical sensing (e.g., smartphone cameras, dermatoscopes) with intelligent AI-based decision-making - it is possible to build low-cost, deployable systems for skin disease screening and risk categorization. Such systems can assist frontline health workers and individuals in remote, automated skin disease detection and monitoring.
Challenge:
Develop an AI-powered Cyber-Physical System that can detect and classify skin diseases from images captured using consumer-grade devices (e.g., smartphones, USB cameras, or dermatoscopes). The system should work across different lighting conditions, skin tones, and disease types, and optionally provide location-aware insights for public health surveillance. Participants should design an end-to-end pipeline, from image acquisition and preprocessing to model inference and output visualization, integrated within a simple web/mobile app or edge-AI deployment scenario.
Scope for Innovation (Flexible Dimensions):
- Data Source: Use of public skin disease datasets or data captured via smartphone, dermatoscope, or any portable imaging device
- Sensing Modality: RGB skin images with optional metadata (e.g., age, gender, GPS)
- Modeling Techniques: CNNs, Vision Transformers, ensemble deep learning, classical ML hybrids
- Deployment Platforms: Cloud servers, Android/iOS apps, or edge-AI enabled devices (e.g., Raspberry Pi, Jetson Nano)
- Visualization: Diagnostic result display with prediction confidence, possible condition severity, and location-based heatmaps for community health tracking
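As one possible realization of the heatmap idea above, the sketch below uses folium's HeatMap plugin; the coordinates are invented placeholders standing in for geo-tagged positive cases:

```python
# A minimal sketch of a location-based heatmap for community health
# tracking. The coordinates below are made up for illustration.
import folium
from folium.plugins import HeatMap

# Hypothetical geo-tagged positive cases: (lat, lon)
cases = [(22.57, 88.36), (22.58, 88.37), (22.57, 88.37)]

m = folium.Map(location=cases[0], zoom_start=13)
HeatMap(cases).add_to(m)  # density shading over case locations
m.save("case_heatmap.html")
```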
Sub Tasks:
- SubTask 1 (Annotated Dataset): Use or create a labeled dataset of skin images categorized by disease type (e.g., eczema, psoriasis, melanoma). Annotate based on disease severity if available.
- SubTask 2 (AI Model Development): Train a classification model to detect and categorize skin conditions from images. Use evaluation metrics such as Accuracy, Precision, Recall, F1-score, and AUC.
- SubTask 3 (Deployment – Cloud/Mobile/Edge): Deploy the trained model using a CPS setup (a smartphone app, cloud API, or embedded edge-AI device) to enable real-time or offline inference; a minimal API sketch follows this list.
- SubTask 4 (Web or Mobile Application): Create an intuitive web or mobile application to upload skin images. The interface should display predicted disease type, severity level (if applicable), and optional mapping for epidemiological insight.
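One lightweight way to realize the cloud API option in SubTask 3 is a small FastAPI service. In this sketch, classify is a stand-in for real model inference, and the returned condition and confidence are invented placeholders:

```python
# A minimal sketch of an image-upload prediction endpoint with FastAPI.
import io

from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

def classify(image: Image.Image) -> dict:
    # Placeholder for real model inference (e.g., the CNN from
    # SubTask 2); these return values are invented for illustration
    return {"condition": "eczema", "confidence": 0.87}

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Decode the uploaded skin image and run the classifier
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    return classify(image)

# Run locally with, e.g.: uvicorn main:app --reload
```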
Expected Deliverables:
- A trained AI model capable of identifying and classifying skin diseases
- Annotated dataset (public or self-collected via CPS setup)
- A working deployed system (cloud, mobile, or edge-AI based)
- A user-facing app that performs disease detection and optionally maps cases
- Documentation (max 4 pages) or presentation (max 15 slides) covering:
1. Annotated dataset (self-created or existing)
2. Preprocessing, augmentation techniques
3. Model architecture, evaluation metrics (Accuracy, Precision, Recall, F1-score, AUC, etc.) and performance
4. Screenshots of the deployed app or dashboard
Example Dataset:
Participants may use existing open datasets or generate their own.
How You’ll Be Judged – Evaluation Protocol
1. Accuracy, Precision, Recall, F1-score for classification tasks (a short metrics sketch follows this list)
2. mAP@[IoU=0.5] for object detection
3. Bonus for innovative deployments (Edge AI, Mobile App, Web Dashboard)
4. Emphasis on interpretability, scalability, and public health/social impact
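For reference, the classification metrics above can be computed with scikit-learn; the label arrays below are purely illustrative:

```python
# A minimal sketch of the classification metrics used for judging.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1]  # ground-truth class labels (illustrative)
y_pred = [0, 1, 0, 0, 1]  # model predictions (illustrative)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
# For detection, mAP@[IoU=0.5] is reported by most detection toolkits
# (e.g., the validation routines of ultralytics or torchmetrics).
```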
Hackathon Structure
Round 1: Solution Design and Model Submission (45-50 days)
- Share your code via GitHub (or Google Colab)
- Submit a concise 4-page technical summary
Round 2: Online Evaluation and Shortlisting
- Submitted code and technical summaries are evaluated online
- Shortlisted finalist teams advance to the live presentation round
Round 3: Final Presentation Round (Live)
- Teams present a final solution, deployment, demo, and insights
- Submit a 15-slide presentation
Who Can Participate? – Eligibility Criteria & Payment Instructions
1. Open to UG/PG/PhD students across all institutions
2. Team Size: 2 members
3. Inter-college teams allowed
Registration Fee: INR 500 (India) / USD 50 (Outside India) + 18% GST
Payment Reference Number and Receipt Upload are Mandatory
Registration End Date: 8th October, 2025 | 30th October, 2025
Code Submission Deadline (Round 1): 15th October, 2025 | 15th November, 2025
Finalists Announcement after online evaluation and shortlisting (Round 2): 22nd October, 2025 | 22nd November, 2025
Online Presentation (Round 3): 1st & 2nd November, 2025 | 29th & 30th November, 2025