After experimenting with SAM2 for plant segmentation, I decided to take things one step further and train my own model. Using Roboflow and YOLOv11-seg, I annotated about 100 images of plants, added some augmentations, and trained a custom instance segmentation model directly in Google Colab - following this great tutorial: https://lnkd.in/d-jBBcrv. Within minutes I had a fully fine-tuned YOLOv11 model on my own data - and the results are fantastic. Next, I'll run inference on a few hundred new RGB images, use the resulting masks to isolate the same regions in my aligned thermal images, and start extracting agricultural insights by combining the visual and thermal data. It's amazing how accessible this workflow has become - from annotation to a production-ready segmentation model, all in a single Colab notebook. #ComputerVision #AgTech
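For anyone wanting to try the same pipeline, here is a minimal hedged sketch of the workflow described above using the Ultralytics API: fine-tune a YOLOv11-seg checkpoint, run inference, then sample a pixel-aligned thermal image under each predicted mask. All file names, paths, and hyperparameters are illustrative assumptions, not the author's actual setup.

```python
# Hedged sketch: fine-tune YOLOv11-seg, then transfer RGB masks to an
# aligned thermal image. Paths, epochs, and image size are assumptions.
import cv2
import numpy as np
from ultralytics import YOLO

# 1) Fine-tune on the Roboflow-exported dataset (data.yaml is assumed).
model = YOLO("yolo11n-seg.pt")
model.train(data="plants/data.yaml", epochs=50, imgsz=640)

# 2) Predict instance masks on a new RGB frame.
result = model("field_rgb_0001.jpg")[0]

# 3) Sample the pixel-aligned thermal image under each plant mask.
thermal = cv2.imread("field_thermal_0001.tif", cv2.IMREAD_UNCHANGED)
if result.masks is not None:
    for i, m in enumerate(result.masks.data.cpu().numpy()):
        m = cv2.resize(m, thermal.shape[1::-1], interpolation=cv2.INTER_NEAREST)
        plant_pixels = thermal[m > 0.5]
        print(f"plant {i}: mean thermal value {plant_pixels.mean():.2f}")
```

This assumes the RGB and thermal frames are already registered at the same resolution; in practice you would first warp or resample one modality onto the other.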
𝐒𝐦𝐚𝐫𝐭 𝐒𝐭𝐨𝐜𝐤 𝐚𝐧𝐝 𝐒𝐩𝐚𝐜𝐞 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐒𝐲𝐬𝐭𝐞𝐦 𝐮𝐬𝐢𝐧𝐠 𝐘𝐎𝐋𝐎𝐯𝟏𝟏
I've developed an intelligent vision-based system that automatically detects warehouse zones, identifies stock items, and calculates the exact occupancy level of each area in real time. Powered by YOLOv11, the system processes video streams to determine whether a zone is Empty, Partial, or Full, while assigning persistent zone IDs and generating dynamic visual overlays such as dashed zone borders and color-coded indicators. Using advanced mask-based calculations, the system also computes precise stock percentages inside each zone, offering a reliable solution for inventory tracking, space optimization, and industrial automation. 👉 I hope you liked this project. If you have any questions, feel free to reach out to me. #Ultralytics #YOLOv11 #YOLOv12 #YOLOvX #ComputerVision #OpenCV #DeepLearning #RealTimeDetection #MachineLearning #AIProjects #PythonProjects #Industry40 #VisionAI #SmartWarehouse #ObjectDetection #Automation #TechInnovation #NeuralNetworks
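As a rough illustration of the mask-based occupancy idea, here is a hedged Python sketch: rasterize a zone polygon, union the YOLOv11-seg stock masks, and report the covered percentage. The checkpoint name, thresholds, and status cutoffs are all assumptions for illustration, not the actual system.

```python
# Hedged sketch of mask-based zone occupancy (thresholds are assumptions).
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # assumed checkpoint; the real system differs

def zone_occupancy(frame, zone_polygon, conf=0.4):
    """Percent of a zone polygon covered by detected stock masks."""
    h, w = frame.shape[:2]
    zone_mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(zone_mask, [np.asarray(zone_polygon, dtype=np.int32)], 1)

    result = model(frame, conf=conf, verbose=False)[0]
    stock_mask = np.zeros((h, w), dtype=np.uint8)
    if result.masks is not None:
        for m in result.masks.data.cpu().numpy():  # one float mask per instance
            m = cv2.resize(m, (w, h), interpolation=cv2.INTER_NEAREST)
            stock_mask |= (m > 0.5).astype(np.uint8)

    covered = int((stock_mask & zone_mask).sum())
    pct = 100.0 * covered / max(int(zone_mask.sum()), 1)
    status = "Empty" if pct < 5 else "Partial" if pct < 80 else "Full"
    return pct, status
```

With the intersection ratio in hand, the dashed borders and color-coded labels are ordinary OpenCV overlay work driven by `pct` and `status`.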
🚀 Introducing DockSense: The Future of Automated & Intelligent Molecular Docking! Tired of the cumbersome, multi-step process of traditional molecular docking? We are thrilled to launch DockSense, the innovative, fully automated software that redefines efficiency and accuracy in computational drug discovery. DockSense is your new indispensable research partner, seamlessly integrating and controlling popular tools like AutoDock Vina and Discovery Studio right from your system.
Key Features that Accelerate Your Research:
✅ Full Workflow Automation: From file preparation to simulation execution, DockSense automates the entire docking process, dramatically reducing hands-on time and human error.
✅ Machine Learning Integration: Moving beyond standard docking, DockSense uses Adaptive Learning from historical data to build predictive models, guiding future experiments for enhanced accuracy and reliability.
✅ Advanced Statistical Reporting: Get instant, comprehensive reports with detailed statistical analysis, including the mean and standard deviation of binding affinity plus the RMSD lower and upper bounds (l.b./u.b.), giving you clear insight into result consistency.
✅ Unmatched User Experience: Featuring an intuitive user interface and unique Voice Command capabilities, DockSense is accessible to researchers of all expertise levels, freeing you to focus on the science.
Stop wasting time on file management and manual calculations. Start using an intelligent tool that learns and evolves with your research. 💡 Ready to revolutionize your molecular docking workflow and achieve more reliable, faster results? Learn more and explore how DockSense can transform your drug candidate pipeline today! #MolecularDocking #DrugDiscovery #ComputationalBiology #Bioinformatics #Automation #MachineLearning #AutoDockVina #PharmaceuticalResearch #DockSense
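DockSense's internals aren't public here, but the statistics it reports can be illustrated with a short hedged sketch that parses a standard AutoDock Vina log and computes the mean and standard deviation of the binding affinities. The file name and regex are assumptions based on the usual Vina output table, not DockSense's actual code.

```python
# Hedged sketch: summary statistics from an AutoDock Vina result table.
# The log path is hypothetical; rows typically look like:
#    1       -7.5      0.000      0.000   (mode, affinity, rmsd l.b., rmsd u.b.)
import re
import statistics

def parse_vina_log(path):
    rows = []
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*\d+\s+(-?\d+\.\d+)\s+(\d+\.\d+)\s+(\d+\.\d+)", line)
            if m:
                rows.append(tuple(float(x) for x in m.groups()))
    return rows  # [(affinity, rmsd_lb, rmsd_ub), ...]

rows = parse_vina_log("ligand_out.log")  # hypothetical file name
affinities = [r[0] for r in rows]
print(f"mean affinity: {statistics.mean(affinities):.2f} kcal/mol")
print(f"std deviation: {statistics.stdev(affinities):.2f} kcal/mol")
```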
Today's event on Industrial Machine Learning in IDA Automation DK was a big success, and the 50 delegates were privileged to listen to four experts, each sharing their insights and experience. Here are my key takeaways of the day:
- Scientific ML bridges physics and data for robust, explainable solutions (Prof. Allan Peter Engsig-Karup, DTU).
- Real-time ML drives process optimization, but success depends on operator involvement and upskilling (Søren Villumsen, DTU).
- MLflow makes experiment tracking and deployment manageable—even for real-time edge applications (Thor Steen Larsen, DSB); see the minimal tracking sketch below.
- MLOps Trifecta: Scalability, traceability, and efficiency are essential for moving ML from lab to production (Rasmus Steiniche, Neurospace).
#MachineLearning #MLOps #IDA
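On the MLflow point, the barrier to entry really is low; a minimal tracking sketch looks like this, where the experiment name, parameters, and metric values are purely illustrative:

```python
# Minimal MLflow tracking sketch; all names and numbers are illustrative.
import mlflow

mlflow.set_experiment("edge-inference-demo")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("model", "small-cnn")
    mlflow.log_param("quantization", "int8")
    for step, latency_ms in enumerate([12.1, 11.8, 11.9]):
        mlflow.log_metric("latency_ms", latency_ms, step=step)
```

Runs logged this way appear side by side in the MLflow UI, which is what makes the tracking-to-deployment path manageable even for edge workloads.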
Ever spent days building a dashboard? I prototyped an 𝗮𝗴𝗲𝗻𝘁 𝗽𝗼𝘄𝗲𝗿𝗲𝗱 𝗯𝘆 𝗡𝗩𝗜𝗗𝗜𝗔 𝗡𝗜𝗠𝘀 that uses 𝗟𝗟𝗠𝘀 + 𝗥𝗔𝗣𝗜𝗗𝗦 𝗰𝘂𝘅𝗳𝗶𝗹𝘁𝗲𝗿. This agent can:
- Ingest a 𝟭𝟰𝟲𝗠-row dataset
- Auto-generate a 𝗳𝘂𝗹𝗹𝘆 𝗶𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝘃𝗲 𝗱𝗮𝘀𝗵𝗯𝗼𝗮𝗿𝗱 𝘄𝗶𝘁𝗵 𝟵 𝗰𝗵𝗮𝗿𝘁𝘀
- All in ~𝟯𝟬 𝘀𝗲𝗰𝗼𝗻𝗱𝘀
What used to take 𝗮 𝘄𝗲𝗲𝗸 of setup and tweaking now happens 𝗶𝗻𝘀𝘁𝗮𝗻𝘁𝗹𝘆 — no manual linking, no waiting on rendering, just results. code: https://lnkd.in/gf2P3rpr I'll be demoing this at 𝗣𝘆𝗗𝗮𝘁𝗮 𝗦𝗲𝗮𝘁𝘁𝗹𝗲 in my workshop: 🎙️ Scaling Large-Scale Interactive Data Visualization with Accelerated Computing #RAPIDS #cuxfilter #NVIDIA #NIM #DataViz #LLMs #PyData #PyDataSeattle #GPUComputing #DevTools
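Under the hood the agent emits a cuxfilter dashboard spec; a hand-written minimal version looks like the following hedged sketch, where the file and column names are placeholders rather than the demo dataset:

```python
# Minimal cuxfilter dashboard sketch; data and columns are placeholders.
import cudf
import cuxfilter
from cuxfilter import charts

gdf = cudf.read_parquet("trips.parquet")  # hypothetical GPU DataFrame
cux_df = cuxfilter.DataFrame.from_dataframe(gdf)

dashboard = cux_df.dashboard(
    [charts.bar("passenger_count"), charts.scatter(x="pickup_x", y="pickup_y")],
    layout=cuxfilter.layouts.double_feature,
)
dashboard.show()  # cross-filtered, GPU-accelerated charts in the browser
```

The agent's value is generating and linking nine such charts from a natural-language request instead of writing this spec by hand.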
🔥 30 seconds for 146M rows and 9 charts? Wild. Agentic pipelines + GPU acceleration are rewriting the rules. #NVIDIA #RAPIDS #DataViz
🚀 𝗨𝗡𝗟𝗘𝗔𝗦𝗛 𝗧𝗛𝗘 𝗣𝗢𝗪𝗘𝗥 𝗢𝗙 𝗥𝗢𝗕𝗢𝗧𝗜𝗖𝗦 𝗪𝗜𝗧𝗛 𝗧𝗛𝗘 𝗡𝗘𝗪 𝗥𝗢𝗦 𝗠𝗖𝗣 𝗦𝗘𝗥𝗩𝗘𝗥! 🚀 𝗜𝗻𝗻𝗼𝘃𝗮𝘁𝗲. 𝗔𝗱𝗮𝗽𝘁. 𝗦𝘂𝗿𝗽𝗮𝘀𝘀. The ROS MCP Server is the game-changer the robotics world has been waiting for! Imagine a world where your robots can understand natural language commands and transform them seamlessly into ROS instructions. That's the future we are stepping into 𝗥𝗜𝗚𝗛𝗧 𝗡𝗢𝗪!
🔷 𝗪𝗛𝗔𝗧 𝗖𝗔𝗡 𝗬𝗢𝗨 𝗗𝗢 𝗪𝗜𝗧𝗛 𝗜𝗧?
1. 𝗦𝗲𝗮𝗺𝗹𝗲𝘀𝘀 𝗧𝗮𝘀𝗸 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻: Transform simple commands like "Make the robot move forward" into actions, whether you're developing in Claude Desktop or Cursor.
2. 𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗱 𝗖𝗼𝗻𝘁𝗿𝗼𝗹: Use `pub_twist` and `pub_twist_seq` to manage robotic movements with precision, ensuring tasks are executed with accuracy.
3. 𝗥𝗲𝗮𝗹-𝗧𝗶𝗺𝗲 𝗜𝗺𝗮𝗴𝗲 𝗖𝗮𝗽𝘁𝘂𝗿𝗲: Utilize `sub_image` to capture and analyze surroundings, enhancing situational awareness and adaptability.
🔷 𝗪𝗛𝗬 𝗧𝗛𝗜𝗦 𝗠𝗔𝗧𝗧𝗘𝗥𝗦 In today's fast-paced world, efficiency and adaptability are non-negotiable. The ROS MCP Server empowers developers and companies to push the envelope, bringing innovative solutions to life with ease and speed.
🔷 𝗛𝗢𝗪 𝗧𝗢 𝗚𝗘𝗧 𝗦𝗧𝗔𝗥𝗧𝗘𝗗 Get up and running with just a few commands. Whether you're installing via Smithery or locally, the setup is intuitive and straightforward. 🔗 [Explore Full Details on GitHub](https://lnkd.in/dP5PzS-s) and 🔗 [Discover on UBOS Marketplace](https://lnkd.in/dmKAkZVA).
𝗖𝗛𝗔𝗟𝗟𝗘𝗡𝗚𝗘 𝗧𝗛𝗘 𝗦𝗧𝗔𝗧𝗨𝗦 𝗤𝗨𝗢 and 𝗘𝗠𝗕𝗥𝗔𝗖𝗘 𝗜𝗡𝗡𝗢𝗩𝗔𝗧𝗜𝗢𝗡. Your robots can be more versatile, adaptive, and intelligent. Engage the new wave of robotics and let's redefine what's possible! 👥 𝗝𝗼𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻: How would YOU use the ROS MCP Server? Let's talk in the comments below. #MCP #AIAgents #Robotics #Innovation #Automation #TechRevolution #ROSIntegration #FutureIsNow 𝙇𝙚𝙩'𝙨 𝙧𝙤𝙘𝙠 𝙩𝙝𝙞𝙨!
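For context on what a tool like `pub_twist` ultimately does, here is a hedged plain-ROS sketch of the effect of "make the robot move forward". The topic name, speed, and duration are assumptions, and this is ordinary rospy rather than the MCP server's own implementation:

```python
# Hedged sketch: the plain-ROS effect of a "move forward" command.
# Topic, speed, and duration are assumptions; not the MCP server's code.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("move_forward_demo")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)

cmd = Twist()
cmd.linear.x = 0.2  # m/s forward

rate = rospy.Rate(10)          # publish at 10 Hz
for _ in range(20):            # ~2 seconds of motion
    pub.publish(cmd)
    rate.sleep()
pub.publish(Twist())           # zero twist to stop the robot
```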
📚 Learn how to deploy vLLM on NVIDIA DGX Spark — the right way! NVIDIA AI just published a detailed best practices guide for running high-throughput inference with vLLM, including multi-node setups and optimized Docker builds. 👉 Dive in: https://lnkd.in/e6tRirwr #vLLM #NVIDIA #DGXSpark #LLMInference #AIInfrastructure
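Before the DGX Spark specifics, a minimal vLLM offline-inference snippet makes a useful baseline; the model id below is an illustrative assumption, and the linked guide covers the Spark-optimized Docker builds and multi-node serving:

```python
# Minimal vLLM offline-inference sketch; model id is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed model
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(outputs[0].outputs[0].text)
```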
CHEMRIYA (part of the REAXENSE™ platform) is now accessible through the CHEESE search engine by Deep MedChem! A portion of CHEMRIYA—an ultra-large, synthesis-aware chemical space—is now live in CHEESE for similarity- and embedding-based searches (https://lnkd.in/gHCM5yvV). From the Reaxense Inc. perspective, CHEMRIYA also sits inside VirtuSynthium™, alongside ~10¹⁶ synthesis-ready virtual compounds—allowing teams to focus on candidates that can actually be produced and tested within one platform: https://lnkd.in/gkm2Ca6x
Deep MedChem has outlined a secure, high-throughput hit-expansion workflow powered by CHEESE, reframing ligand-based search as vector lookup while supporting on-premise and behind-firewall deployment. CHEESE embeds 3D shape and electrostatic similarity for ultra-fast ANN search (seconds on billions) and shows strong benchmark performance with significant cost reductions versus traditional 3D methods.
What this means for users working with CHEMRIYA and REAXENSE:
• Multiple similarity modes in one workflow — classic 2D fingerprints (e.g., Morgan) plus 3D shape and electrostatic similarity to reveal non-obvious analogs.
• Security options — CHEESE can be deployed behind your firewall for private searches on internal libraries.
• Scale & speed — approximate-nearest-neighbor indexing enables sub-second to few-second retrieval across multi-billion-compound spaces at much lower compute cost.
If you're exploring secure analog discovery, multi-metric similarity, or large-scale virtual space navigation, we'd be glad to discuss how CHEMRIYA (via CHEESE) and VirtuSynthium™ can support your workflow.
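To make "ligand-based search as vector lookup" concrete, here is a hedged RDKit sketch using classic 2D Morgan fingerprints with Tanimoto ranking. The SMILES are arbitrary examples, and CHEESE's own 3D shape/electrostatic embeddings with ANN indexing are a different, far faster mechanism:

```python
# Hedged sketch: 2D fingerprint similarity as vector lookup (RDKit).
# SMILES are arbitrary; CHEESE uses learned 3D embeddings + ANN instead.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

library = ["CCO", "CCN", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]
query = "c1ccccc1N"

lib_fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in library]
query_fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(query), 2, nBits=2048)

ranked = sorted(
    ((DataStructs.TanimotoSimilarity(query_fp, fp), s) for fp, s in zip(lib_fps, library)),
    reverse=True,
)
for score, smiles in ranked:
    print(f"{score:.3f}  {smiles}")
```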
Vision-Language Models (VLMs) offer a significant leap forward from traditional computer vision, bringing high value to scenarios where context is critical and reporting is a must. But VLMs are significantly harder to fine-tune so that they work reliably for particular tasks. I'll be giving a talk at SWITCH (30 Oct, 1:30pm) in which I discuss where the opportunities are and how Datature helps you overcome challenges around dataset preparation, streamlining large multi-GPU training runs, and managing experiments to build successful VLM projects. What are the applications you see as most promising for VLMs? #SWITCH2025 #SUTD #Datature #VisionAI
Extropic has claimed a major breakthrough in AI hardware with its new architecture, introducing thermodynamic sampling units (TSUs) that promise up to 10,000× the energy efficiency of modern GPUs. These chips use probabilistic circuits rather than traditional matrix-math pipelines, enabling generative AI tasks with far lower power consumption. While the hardware is still at the prototype stage, Extropic says its development kit is already being tested and that it aims to scale production next year, potentially removing what it sees as the next major bottleneck in AI infrastructure: energy.