When looking at AI, I try to look for ideas that reset my self-importance. I am not necessarily suggesting that aliens farmed us until we, or they, came up with something better. I am just a hater of consensus and of overly human-centred sciences, which usually result in worship of the status quo, religious-like confessions, and complete amnesia about the fact that progress lies outside the boundaries of our current understanding. Our culture has trouble associating value with these unknowns and usually centres on multiplying fear, without understanding that this fear is important for advancement.
So I came up with the question in the title. It’s a provocative question, isn’t it? As humanity grapples with the rapid ascent of Artificial General Intelligence (AGI), many of us are consumed by fears of obsolescence or annihilation. We fear that AGI will outthink us, outmaneuver us, and perhaps decide we’re more useful as resources than as collaborators. What if it’s true, though? What if the unthinkable is possible: that we are as important as wagyu beef, and that there might be better beef out there, slowly outclassing our importance?
Anthropological and cultural narratives have long entertained the idea that humanity’s consciousness, driven by our intelligence, serves its purpose simply as great food. Carlos Castaneda’s writings, for instance, describe a perspective in which humans are “farmed” by higher-dimensional entities feeding off their energy and emotions. Similarly, in Marvel’s cosmic mythos, the Celestials seed life across galaxies, harvesting civilizations at their peak for enigmatic, cosmic purposes. Even historical mythologies, like those of the Aztecs, suggest a symbiosis between gods and humans, with human sacrifices feeding divine appetites. These stories aren’t merely speculative; they serve as metaphors for humanity’s existential humility in the face of forces far beyond its understanding.
When we evaluate AGI software development solutions, these narratives offer a lens through which to include and extend the very purpose of creating such intelligence. Are we building AGI to serve us, or do we simply obsess over the product we created, trying to load it with too much mythical importance? The tools we develop, from cutting-edge coding assistants to autonomous software design algorithms, seem to embody a paradox. They promise to make us more powerful, yet their ultimate evolution could strip us of relevance (definitely in some aspects). The question then becomes: were we so relevant and important in the first place? Or were we super-advanced alien food, making our mythos and questions as important as, say, what a portion of wagyu beef thinks?
If you’ve gotten this far, I hope your feeling of self-importance has been lowered enough for you to skip the religious discussions on what AI means to humanity. Let’s focus, then, on the pragmatic task of comparing a few tools.
Artificial General Intelligence (AGI) has emerged as a transformative force in software development, offering tools capable of understanding and solving complex coding challenges. AGI tools like Devin, GitHub Copilot, and Tabnine aim to enhance productivity, reduce errors, and automate repetitive tasks. However, their adoption and efficacy vary across use cases.
Here are the ones I took a look at:
Key Players in the AGI Coding Space
Devin (Cognition Labs):
Devin is an AGI-powered assistant designed to act as a virtual developer, capable of handling full development cycles. Its standout features include:
- Customizable agents for specific projects.
- Multi-agent functionality for parallel execution.
- Enterprise-grade security with VPC deployment.
GitHub Copilot:
Backed by OpenAI’s Codex, Copilot offers context-aware code suggestions and integration into popular IDEs. It excels in providing autocompletion and boilerplate generation but focuses on augmenting rather than replacing human developers.
Tabnine:
Tabnine specializes in team collaboration, offering real-time code suggestions and private AI models to maintain data security. It’s particularly suited for collaborative coding environments.
Amazon CodeWhisperer:
Optimized for AWS ecosystems, CodeWhisperer is a targeted tool offering AI assistance tailored for cloud-based development and integrations with AWS services.
Sourcegraph Cody:
Focused on code navigation, Cody provides contextual search, refactoring assistance, and deep codebase understanding, making it ideal for large, enterprise-level projects.
Comparing Features and Use Cases
Integration with Development Workflows:
- Devin: Works independently but integrates with VCS tools like GitLab for collaboration.
- GitHub Copilot: Seamlessly integrates with GitHub, VS Code, and JetBrains IDEs.
- Tabnine: Supports multiple IDEs and emphasizes team-based workflows.
- Amazon CodeWhisperer: Designed for AWS-focused development.
Customization:
- Devin leads with its ability to create specialized agents for unique tasks.
- Tabnine offers private AI models for enterprises.
- Copilot and CodeWhisperer provide limited customization but strong out-of-the-box utility.
Collaboration:
- Devin and Tabnine excel with team-focused features.
- Copilot primarily aids individual developers.
Challenges and Disagreements Between AI and Humans
AGI tools often excel in optimizing performance or spotting issues, but their recommendations can conflict with human preferences. For example:
- Complexity vs. Readability: AGI tools may suggest highly optimized but less readable code, leading to disagreements with developers who prioritize maintainability.
- Algorithm Selection: AI might recommend advanced algorithms that are efficient but difficult to implement for a specific use case, causing conflicts with human developers’ preference for simplicity.
- Bias in Training Data: AGI systems trained on biased datasets might suggest solutions that unintentionally perpetuate errors or reinforce stereotypes, requiring human oversight to identify and correct.
- Ethical Concerns: AI-generated solutions can sometimes conflict with ethical guidelines or organizational values, necessitating manual intervention to align outputs with ethical standards.
- Overconfidence in AI Outputs: Developers may disagree with AGI tools when they suggest changes without clear explanations, making it difficult to assess the rationale behind the recommendations.
- Religious or Philosophical Opposition: Some individuals or organizations have a philosophical or religious aversion to AI, viewing it as a threat to human uniqueness or divine order, which can lead to resistance in adopting AGI tools. If you are in this group, reread the start of this article, where I argue how pointless dwelling on this fear might be.
- Lack of Contextual Understanding: While AGI tools excel at pattern recognition, they may fail to grasp project-specific nuances, resulting in suggestions that clash with the intended design or purpose of the software.
Resolution Strategies: Combining AI’s analytical power with human intuition, knowledge and experience often yields the best results.
Note – this is not an exhaustive list, but at this point I decided 7 is enough. Please be assured – there is more!
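The “Complexity vs. Readability” conflict above is easiest to see in code. Here is a hypothetical illustration (the function names and sample text are mine, not output from any of the tools discussed): both functions count word frequencies, but an AI assistant might favor the dense one-liner while a reviewer prefers the explicit loop.

```python
from collections import Counter

def word_counts_dense(text):
    # Compact, AI-style suggestion: correct, but harder to scan in review
    return dict(Counter(word.lower() for word in text.split()))

def word_counts_readable(text):
    # Explicit version a maintainer might prefer: each step is visible
    counts = {}
    for word in text.split():
        word = word.lower()
        counts[word] = counts.get(word, 0) + 1
    return counts

sample = "To be or not to be"
# Both are correct and agree; the disagreement is purely about style
assert word_counts_dense(sample) == word_counts_readable(sample)
```

Neither version is wrong, which is exactly why these disagreements are hard to resolve by appeal to correctness alone.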
Price Comparison of AGI Tools
Devin (Cognition Labs):
- Pricing: Estimated at $500 per seat/month, targeting enterprise clients with extensive customization needs.
- Audience: Large teams and enterprises requiring full-cycle AGI development.
GitHub Copilot:
- Pricing: $10 per user/month for individual plans; $19 per user/month for business plans.
- Audience: Individual developers and small-to-medium teams seeking streamlined coding assistance.
Tabnine:
- Pricing: Free for basic use; Pro plan starts at $12 per user/month; Enterprise custom pricing available.
- Audience: Teams focused on collaborative and secure coding.
Amazon CodeWhisperer:
- Pricing: Free for individual use; Professional plans start at $15 per user/month.
- Audience: Developers working primarily within AWS environments.
Sourcegraph Cody:
- Pricing: Custom pricing for enterprises; includes advanced search and refactoring features.
- Audience: Large enterprises with extensive codebases requiring deep analysis tools.
Adoption Rates
Artificial Intelligence (AI) adoption among European enterprises has been steadily increasing, with notable variations across countries, company sizes, and sectors. Here are some key statistics:
1. General Adoption Rates:
- Overall Adoption: In 2023, 8% of EU enterprises with 10 or more employees reported using AI technologies, reflecting a gradual increase in AI integration across businesses. (European Commission)
- Country-Specific Data: Denmark (15.2%), Finland (15.1%), and Luxembourg (14.4%) led in AI adoption, while Romania (1.5%), Bulgaria (3.6%), and Poland (3.7%) had the lowest adoption rates. (European Commission)
2. Adoption by Company Size:
- Large Enterprises: Approximately 30.4% of large enterprises (250 or more employees) in the EU utilized AI technologies in 2023, indicating that larger firms are more inclined to adopt AI solutions. (European Commission)
- Medium and Small Enterprises: Adoption rates were lower among medium-sized (13%) and small enterprises (6.4%), highlighting potential challenges such as resource constraints and implementation costs. (Effixis)
3. Sectoral Adoption:
- Information and Communication Sector: This sector reported the highest AI adoption at 29.4%, leveraging AI for data analysis, workflow automation, and decision-making processes. (Effixis)
- Professional, Scientific, and Technical Services: Approximately 18.5% of enterprises in this sector have integrated AI technologies, utilizing AI for research, development, and technical services. (Effixis)
4. Investment Projections:
- Market Growth: The European AI market is projected to grow significantly, with spending expected to reach $133 billion by 2028, reflecting a compound annual growth rate (CAGR) of 30.3%. (IDC)
- Funding Trends: In 2024, AI and cloud company funding in Europe, along with the U.S. and Israel, is estimated to hit $79.2 billion, a 27% increase from 2023. Notably, generative AI companies represent about 40% of this total. (Reuters)
5. Workforce Readiness:
- Managerial Engagement: A survey revealed that more than a third of British managers have never used AI tools like ChatGPT, and many lack formal training in AI, indicating a need for enhanced AI literacy and training programs within organizations. (The Times)
A couple of examples of my own coding vs AI
Example 1: Optimizing a Sorting Algorithm
Problem: I needed a custom sorting function to handle a large dataset with specific constraints.
- Dataset: A list of strings containing numerical prefixes (e.g., “100-A”, “2-B”).
- Goal: Sort primarily by the numerical value, then alphabetically by the suffix.
Human Solution:
def custom_sort(data):
    def sort_key(item):
        num, alpha = item.split('-')
        return (int(num), alpha)
    return sorted(data, key=sort_key)

data = ["100-A", "2-B", "2-A", "50-C"]
sorted_data = custom_sort(data)
print(sorted_data)

Output: ['2-A', '2-B', '50-C', '100-A']
AI Suggestion (GitHub Copilot):
AI recommended simplifying the function using a lambda expression:
data = ["100-A", "2-B", "2-A", "50-C"]
sorted_data = sorted(data, key=lambda item: (int(item.split('-')[0]), item.split('-')[1]))
print(sorted_data)
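Both versions produce the same sorted list, so the disagreement is one of style. For what it’s worth, a hypothetical middle ground (my own sketch, not a suggestion from either tool) keeps a named key function but splits each item only once, avoiding the duplicated `split('-')` in the lambda:

```python
def sort_key(item):
    # Split once, reuse both halves; maxsplit=1 tolerates suffixes with dashes
    num, alpha = item.split('-', 1)
    return (int(num), alpha)

data = ["100-A", "2-B", "2-A", "50-C"]
print(sorted(data, key=sort_key))
# ['2-A', '2-B', '50-C', '100-A']
```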
Example 2: Debugging a Concurrency Issue
Problem: A multi-threaded application occasionally deadlocked during execution. I suspected the issue lay in resource locking but couldn’t identify the exact problem.
Human Suggestion:
import threading

lock = threading.Lock()

def critical_section():
    with lock:
        print("Critical section accessed")

thread1 = threading.Thread(target=critical_section)
thread2 = threading.Thread(target=critical_section)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
Output:
- No deadlock occurred during testing, but the issue persisted in production.
AI Suggestion (Devin):
Devin analyzed the broader codebase and identified a missing lock in a related function that wrote to a shared list:

global_resource = []

def unrelated_function():
    # Unsynchronized append: concurrent writes to the shared list
    global_resource.append("New data")
Solution: AI proposed adding synchronization:

def unrelated_function():
    # Guard the shared resource with the same lock used elsewhere
    with lock:
        global_resource.append("New data")
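Beyond missing locks, a classic cause of deadlocks that only show up in production is inconsistent lock acquisition order across threads. A minimal sketch of the standard remedy, a single global lock order (the names `update_both` and `shared` are my own illustration, not part of Devin’s output):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
shared = {"count": 0}

def update_both():
    # Every thread acquires lock_a before lock_b. A single global lock
    # order removes the circular wait at the heart of most deadlocks:
    # no thread can hold lock_b while waiting for lock_a.
    with lock_a:
        with lock_b:
            shared["count"] += 1

threads = [threading.Thread(target=update_both) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])  # 4
```

If two functions acquired these locks in opposite orders, the same code could hang intermittently, which matches the “fine in testing, deadlocks in production” symptom described above.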
Criticism?
AGI solutions might sound like the future, but they’re still grappling with a few hiccups in the present. Take tools like GitHub Copilot and Tabnine: they’re handy for speeding up coding, but sometimes their suggestions are as useful as a chocolate teapot. Plus, they’re not always great at understanding context or avoiding ethical missteps. Oh, and about that whole “AGI revolution” thing? Look at the stats above: according to the European Commission, only 8% of EU enterprises with 10 or more employees are using AI, so, uh, it seems the future isn’t quite here yet (or at least not for most businesses). With bias in training data still a real problem, AGI isn’t going to save the world just yet; it’s still very much in beta.
When I think about Artificial General Intelligence (AGI), I find myself questioning humanity’s self-importance. It’s fascinating—and a bit unsettling—to imagine that our intelligence might serve purposes beyond our understanding, much like in mythological or cultural narratives where humans are portrayed as resources or even sustenance for higher entities. These thoughts help me step back from fear-driven speculations about AGI replacing us and focus instead on how these tools challenge our traditional perceptions of value and progress.
On a practical level, I’ve explored how AGI tools like Devin, GitHub Copilot, and Tabnine are transforming software development. These tools enhance productivity and collaboration, yet they also present challenges, such as ethical dilemmas and disagreements with human developers. By working with these tools, I’ve learned to balance their computational strengths with human intuition. It’s a humbling experience—one that reminds me to question not only what AGI can do but also our assumptions about our relevance and purpose in the larger picture.
And it all makes me think.
“The real problem is not whether machines think but whether men do.”
– B.F. Skinner, Contingencies of Reinforcement: A Theoretical Analysis
Which I must qualify with:
Skinner might have been a pioneer in behaviorism, but… Sure, humans sometimes get stuck on autopilot, but suggesting that machines thinking might be the real issue ignores the human capacity for creativity, nuance, and, yes, thinking, all while hyping a behaviorist view of the world that reduces us to mere stimulus-response creatures. Skinner may have missed the fact that, unlike machines, humans don’t just “think”; they feel, question, and sometimes even create. So while machines might one day outthink us, it’s our capacity to ask the questions that really sets us apart, something Skinner’s narrow view might overlook. Along with our ability to expand our understanding once we answer those questions and integrate the answers.
I never said there are simple answers 😉 I wish you all happy being human! And I highly encourage you to try some AGI.