The code compiled. The tests passed. Then production crashed.

This is the nightmare scenario CTOs and engineering managers face when considering AI-assisted migrations. I've been deep in the world of Objective-C to Swift migrations, and there's a critical gap most teams miss: AI tools excel at syntax conversion but fundamentally misunderstand how memory management differs between Objective-C and Swift.

I've identified five systematic mistakes that emerge in AI-assisted migrations:
→ Nullability assumptions that break implicit contracts
→ Forgotten observer cleanup creating memory leaks
→ Misunderstood closure captures causing retain cycles
→ Notification patterns that introduce memory leaks
→ Over-aggressive @objc annotations killing performance

These aren't edge cases. They're predictable patterns that follow from how AI approaches code conversion. The solution? Let AI handle the mechanical work, but pair it with code review by someone experienced who understands both memory models.

I wrote up the details with examples for each pattern: https://lnkd.in/eztqPh6P

It's worth a read if you're considering migrating from Objective-C to Swift.
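To make the closure-capture pitfall concrete, here's a minimal sketch (hypothetical `DataLoader`/`ViewModel` names, not taken from the linked article) of how a strongly captured `self` creates a retain cycle, and how a `[weak self]` capture list breaks it:

```swift
final class DataLoader {
    // The loader holds its completion handler strongly.
    var onComplete: (() -> Void)?
}

final class ViewModel {
    let loader = DataLoader()
    var title = "Loading"

    init(weakCapture: Bool) {
        if weakCapture {
            // Fixed: the closure holds self weakly, so no cycle forms.
            loader.onComplete = { [weak self] in self?.title = "Done" }
        } else {
            // Retain cycle: self -> loader -> onComplete -> self.
            loader.onComplete = { self.title = "Done" }
        }
    }
}

// The strongly capturing version is never deallocated.
var leaky: ViewModel? = ViewModel(weakCapture: false)
weak var leakyProbe = leaky
leaky = nil
print(leakyProbe == nil)  // false: the cycle keeps the object alive

var fixed: ViewModel? = ViewModel(weakCapture: true)
weak var fixedProbe = fixed
fixed = nil
print(fixedProbe == nil)  // true: deallocated as expected
```

An AI converter that mechanically translates an Objective-C block into a Swift closure tends to produce the strongly capturing form, which is exactly why a reviewer who knows both memory models matters.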
Bart Jacobs’ Post
More Relevant Posts
-
Hear this out: nano services, disposable by design.

A nano service does one and only one thing, and we never touch it again. No patches, no refactoring, no extending it. We retry it. No regen. Each service exists within an orchestrated system that defines intent and validation, but the implementation beneath can be thrown away at will.

In this new world:
- The orchestration layer remains human-owned. That's where alignment resides.
- The generation layer contains the algorithm and is fully delegated to AI. It doesn't even have to be human-readable.

The key here is nano; micro is just not enough. This separation of intent (ours) and execution (AI's) defines a new kind of software architecture. We can forget the debates about maintainable generated code, architectural drift, regen across prompts, and so on. Those are for people who use LLMs to generate old-world code.
-
Lately, I’ve been thinking a lot about how software feels different than it used to. We have AI-assisted development, faster pipelines, and more automation than ever, yet reliability seems to be going in the opposite direction.

I came across this piece, “The Great Software Quality Collapse,” and it really resonated: https://lnkd.in/gXeSEJy9

It captures something I’ve seen firsthand: in our race to accelerate delivery, we’ve quietly accepted a lower standard of quality. We’ve optimized for speed, not sustainability. From a leadership and AI perspective, that trade-off is dangerous. AI won’t fix systemic cultural issues. If anything, it amplifies them, faster. Quality doesn’t fail in one big moment; it erodes through small compromises, justified one sprint at a time.

It’s a thoughtful read, and a timely reminder that true progress isn’t just about how fast we can build, but how well we can sustain quality while using new tools.

#AI #SoftwareQuality #EngineeringLeadership #TechCulture #DevOps #Architecture
-
Simple but powerful.
- LLMs compute the relationships between every pair of tokens in the context. If you load unnecessary instructions, the network ends up weighing that noise.
- Context is limited, and as it grows, quality drops sharply. There's a threshold beyond which the model struggles to infer and to adhere to its prompts.

Although breaking the work down across separate agents is compelling, it brings complexity. A single agent that knows how to load instructions at the right time seems easier and more promising. https://lnkd.in/dwcmiMDD
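A sketch of that "load instructions at the right time" idea (hypothetical task names and instruction strings, not from the linked post): the agent assembles its system prompt from only the blocks the current task needs, instead of carrying every instruction in context at once:

```swift
enum Task {
    case codeReview, summarization, dataExtraction
}

// Instruction blocks live outside the prompt and are loaded on demand,
// so irrelevant guidance never competes for the model's attention.
let instructionBlocks: [Task: String] = [
    .codeReview: "Review the diff for bugs and style issues.",
    .summarization: "Summarize the document in three bullet points.",
    .dataExtraction: "Extract all dates and amounts as JSON.",
]

let baseInstructions = "You are a helpful assistant."

func systemPrompt(for task: Task) -> String {
    // Only the block for the active task is appended.
    let block = instructionBlocks[task] ?? ""
    return baseInstructions + "\n" + block
}

let prompt = systemPrompt(for: .summarization)
print(prompt.contains("Summarize"))        // true
print(prompt.contains("Review the diff"))  // false
```

The same routing could live inside the agent loop, swapping blocks in and out as the task changes, rather than splitting the system into separate agents.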
-
As code generation accelerates (driven by AI and growing engineering teams), we're witnessing a fundamental shift in how build systems must evolve. The pattern is undeniable: Bazel pioneered distributed caching, and now every major build system is following suit. This isn't a trend; it's table stakes for modern software development.

Today's caching solutions follow a monolithic model: proprietary infrastructure tightly coupled with proprietary technology. This creates vendor lock-in and limits innovation. I believe we're at an inflection point. What if caching infrastructure worked like databases? You choose Postgres as your technology, but you're free to host with Supabase, AWS RDS, or run it yourself. The protocol is the standard; the vendor is your choice.

At Tuist, we've built caching infrastructure for Xcode projects (from generated projects to native Xcode builds). We've seen firsthand where enterprises invest: in build performance and developer productivity. Caching is the unlock. Expanding this capability across build systems (Gradle, Swift, Cargo, and beyond) isn't just a business opportunity; it's an architectural imperative for the industry.

We could build another proprietary solution. Instead, I'm exploring something different: a narrow-waist protocol that connects diverse build systems to a competitive ecosystem of infrastructure providers. And here's what's energizing: building this with AI coding agents is transforming how fast we can validate these ideas. The future of infrastructure tooling is being built with the tools of the future.

The question isn't whether build caching becomes ubiquitous; it's whether we build it as open infrastructure or recreate the vendor silos of the past. I'm choosing open.

#DeveloperTools #BuildSystems #OpenSource #DevOps #SoftwareEngineering #CloudInfrastructure #DeveloperProductivity #AI #FutureOfWork
-
“We've created a perfect storm: tools that amplify incompetence, used by developers who can't evaluate the output, reviewed by managers who trust the machine more than their people.”

I struggle sometimes to explain to people that if it’s this bad with #AICoding, it’s quite a lot worse with #AI in #UserResearch, where the inputs are fuzzier and the outputs are even less deterministic. Good code validation can catch buggy #AIGeneratedCode, if it’s performed by a skilled developer (though usually it’s less-skilled developers trying to level up their capacity who lean on #LLMs to generate code for them).

But it’s devilishly more difficult with #Qualitative #Research: most of the valuable context (expressions, body language, how a conversation was flowing, what’s going on in the room besides the subject performing her work or explaining herself) is missing from the transcript. Worse still: your mistakes will take longer to catch. Using #SyntheticUsers to generate ideas for a product or feature? You might not find out how catastrophically wrong you were until after it ships.

Take care my friends, the future is fragile indeed. https://lnkd.in/eq_Zgruh
-
100% agreed on this. I see a consistent theme with AI tools: if the output looks good, it must be good, right? But that's a perilous attitude for research, where we need to make sure insights are an accurate reflection of reality. Fortunately, it's honestly not that hard to bake some validation approaches into the process and tools:

* A:B testing: if a human and a tool evaluate the same set of data, do they output similar results?
* Real-time confirmation: AI tools used for qualitative analysis should export themes paired with the supporting comments, so a human using the tool can confirm that the two are aligned and revise the final deliverable as needed when the tool misses things.

This doesn't cover everything. For example, I've also encountered issues with AI tools missing context cues, even to the level of treating interviewer questions as feedback to incorporate into insights. But it can help ensure you're building on a solid foundation.
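One way to make the A:B check concrete (a sketch with made-up theme labels, not tied to any particular research tool): compare the theme sets a human and the tool extract from the same transcript, and flag low overlap for manual review.

```swift
// Jaccard similarity between two sets of theme labels:
// |intersection| / |union|, in the range [0, 1].
func themeOverlap(_ human: Set<String>, _ tool: Set<String>) -> Double {
    let union = human.union(tool)
    guard !union.isEmpty else { return 1.0 }  // both empty: trivially agree
    return Double(human.intersection(tool).count) / Double(union.count)
}

let humanThemes: Set = ["onboarding friction", "pricing confusion", "trust"]
let toolThemes: Set = ["onboarding friction", "pricing confusion", "feature requests"]

let overlap = themeOverlap(humanThemes, toolThemes)
print(overlap)  // 0.5: 2 shared themes out of 4 distinct ones

// Flag analyses where human and tool diverge too much.
if overlap < 0.7 {
    print("Low agreement: review the tool's themes against supporting quotes.")
}
```

A crude metric like this won't judge whether the themes themselves are right, but it gives a cheap, repeatable signal for when the human and the tool are reading the same data differently.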
-
A new study by theCUBE Research highlights Docker's ROI and its significant impact on agentic AI, security, and developer productivity. I found it interesting that the roughly 400 IT and AppDev professionals surveyed underscored how Docker is enabling faster feature delivery and enhancing security in the software supply chain. What are your thoughts on the role of development platforms in modern enterprise environments? Read more: https://lnkd.in/e2jGexkk
-
How do you cut a months-long process down to just a few days? In the latest Wealthsimple Engineering Blog, Marina Samuel breaks down how AI is helping speed up large-scale migration times. Read about it here: https://lnkd.in/gHTb4jXJ
-
"Here's the most devastating long-term consequence: we're eliminating the junior developer pipeline. Companies are replacing junior positions with AI tools, but senior developers don't emerge from thin air. They grow from juniors who:
* Debug production crashes at 2 AM
* Learn why that "clever" optimization breaks everything
* Understand system architecture by building it wrong first
* Develop intuition through thousands of small failures
Without juniors gaining real experience, where will the next generation of senior engineers come from? AI can't learn from its mistakes—it doesn't understand why something failed. It just pattern-matches from training data."

source: https://lnkd.in/dwzQHy4X