The Tech Sales Newsletter #88: The implications of ASI

So far in this newsletter, the focus on AI has been limited to the short-term progression of enterprise adoption: helping you better understand what things look like today and what opportunities could emerge in the next 3 to 6 months.

Today we will take a different mental model and discuss the logical conclusions if ASI (artificial superintelligence) is achieved.

The key takeaway

For tech sales: We sit at the intersection of two outcomes: AI transforming our industry significantly, or potentially breaking it completely. The closer we are to the second scenario, the bigger the potential impact our actions have within the companies building and selling AI.

For investors: For good or bad, cloud infrastructure software investments remain the largest asymmetric bet we have probably ever seen. The short-term payoff is already very satisfying (AGI and robotics available across multiple industries), but the potential payoff from ASI is astronomical (or catastrophic, depending on your point of view).

The difference between AGI and ASI

Currently, the existing technology looks capable enough to achieve AGI (artificial general intelligence), which essentially means that agentic workflows would behave in a similar manner to a high-IQ white-collar worker. Initially, those would be financially inefficient and have obvious performance challenges (e.g., creativity and adaptability to new scenarios). Over time, some of those performance gaps will close, but the long-term trajectory is that high-performing human employees will continue to be the most critical part of the value chain, while also overseeing large amounts of actionable code (i.e., AI agents performing certain tasks).

An obvious example would be a sales rep using automation and agentic workflows for marketing collateral, analysis of product usage, and productivity improvements around logging and tracking activity.
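
What such a workflow could look like in practice is sketched below. This is a minimal illustration assuming an OpenAI-compatible API; the model name, helper functions, and account data are all invented for the example.

```python
# A minimal sketch of a sales-rep agentic workflow, assuming an OpenAI-compatible
# API. The model name, helper functions, and data are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_product_usage(account: str, usage_events: list[dict]) -> str:
    """Turn raw product usage data into talking points for an account review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a sales assistant. Summarize product usage into three talking points."},
            {"role": "user", "content": f"Account: {account}\nUsage events: {usage_events}"},
        ],
    )
    return response.choices[0].message.content


def draft_activity_log(meeting_notes: str) -> str:
    """Draft a CRM activity log entry from free-form meeting notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Turn these meeting notes into a concise CRM activity log entry."},
            {"role": "user", "content": meeting_notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    events = [{"feature": "sso", "count": 42}, {"feature": "audit_logs", "count": 3}]  # fabricated data
    print(summarize_product_usage("Acme Corp", events))
    print(draft_activity_log("Met Acme's platform team; they want 1-year audit log retention."))
```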

Now, ASI is a completely different play and one of the most controversial topics in technology today. The two companies that are arguably taking it most seriously are OpenAI and xAI (in terms of putting political and financial capital into it). There are also other research labs that play a role, but they are not as visible since a lot of their work is in research phases or they lack the hardware to execute on it.

If AGI is an accelerator/multiplier, ASI breaks the game.


So what does an ASI future look like?

Recently, a hypothetical scenario was published under the name AI 2027. The researchers behind it visualize what an accelerated timeline of progression to ASI could look like and what the actual societal impact might be.

The timeline is very aggressive (basically presenting the idea that if ASI is achieved within 2 years, the world would experience a significant transformational event by 2030). For the purpose of this article, we will disregard the timeline, but we will make two assumptions:

  1. ASI is something that can actually be achieved. This is currently neither proven nor "obvious". There are a significant number of researchers who don't believe the existing transformer architecture is able to achieve such an outcome. If ASI is technically possible, then with the significant amount of investment and effort being put into AI today, it's likely that we will see a version of it on a long enough timeline. If it's not possible, then we are likely to see a progression similar to that from a Nokia 3310 to an iPhone 17 Pro: a massive difference that at a certain stage slows to a crawl and offers no meaningful breakthroughs in the "shape" of AI.

  2. ASI would be able to meaningfully change real-world outcomes. Something that is often misunderstood about ASI is that just because we have superintelligence doesn't mean that whatever ideas it produces will actually out-innovate the normal human creative process. Historically, scientific progress has not been linear: most actual breakthroughs have come from chance, rather than from putting "mental compute" behind solving those problems. So we might have the smartest scientist that ever existed in our pocket, but the actual achievable outcomes might be much more constrained than our theories about what's possible.

Now, let's dig into the AI 2027 scenario. The core structure is an accelerated timeline for achieving ASI, followed by a positive and a negative scenario.

Late 2025: The World’s Most Expensive AI

OpenBrain is building the biggest datacenters the world has ever seen.

(To avoid singling out any one existing company, we’re going to describe a fictional artificial general intelligence company, which we’ll call OpenBrain. We imagine the others to be 3–9 months behind OpenBrain.)

Although models are improving on a wide range of skills, one stands out: OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”) and their US competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it’s good at many things but great at helping with AI research. By this point “finishes training” is a bit of a misnomer; models are frequently updated to newer versions trained on additional data or partially re-trained to patch some weaknesses.

The logic here is that rather than focusing on training models for business purposes, a company puts its resources behind training models that can build their own next version, essentially shortening the path to ASI.
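
A toy sketch of that loop is below. Everything in it (the Model class, the uplift numbers) is invented purely to illustrate the compounding effect; it is not how any real lab trains models.

```python
# Toy illustration of a self-improvement loop: each generation's research output
# becomes the uplift for training its successor. The numbers are arbitrary.
import random


class Model:
    def __init__(self, generation: int, research_skill: float):
        self.generation = generation
        self.research_skill = research_skill  # stand-in for "how good at AI R&D"

    def propose_improvements(self) -> float:
        # A more skilled researcher proposes bigger improvements, with some noise.
        return self.research_skill * random.uniform(0.05, 0.15)


def train_next_generation(current: Model) -> Model:
    uplift = current.propose_improvements()
    return Model(current.generation + 1, current.research_skill + uplift)


model = Model(generation=1, research_skill=1.0)
for _ in range(10):
    model = train_next_generation(model)
    print(f"Agent-{model.generation}: research skill ~ {model.research_skill:.2f}")
```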

Modern AI systems are gigantic artificial neural networks. Early in training, an AI won’t have “goals” so much as “reflexes”: If it sees “Pleased to meet”, it outputs “ you”. By the time it has been trained to predict approximately one internet’s worth of text, it’ll have developed sophisticated internal circuitry that encodes vast amounts of knowledge and flexibly role-plays as arbitrary authors, since that’s what helps it predict text with superhuman accuracy.

After being trained to predict internet text, the model is trained to produce text in response to instructions. This bakes in a basic personality and “drives.” For example, an agent that understands a task clearly is more likely to complete it successfully; over the course of training the model “learns” a “drive” to get a clear understanding of its tasks. Other drives in this category might be effectiveness, knowledge, and self-presentation (i.e. the tendency to frame its results in the best possible light).

OpenBrain has a model specification (or “Spec”), a written document describing the goals, rules, principles, etc. that are supposed to guide the model’s behavior. Agent-1’s Spec combines a few vague goals (like “assist the user” and “don’t break the law”) with a long list of more specific dos and don’ts (“don’t say this particular word,” “here’s how to handle this particular situation”). Using techniques that utilize AIs to train other AIs, the model memorizes the Spec and learns to reason carefully about its maxims. By the end of this training, the AI will hopefully be helpful (obey instructions), harmless (refuse to help with scams, bomb-making, and other dangerous activities) and honest (resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion).

Going in such a direction means that besides the technical progress of the model, the most important thing to get right is "alignment"—that is, behaviors that would be beneficial for us, rather than harmful.
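
As a side note, the "reflex" behavior described in the excerpt above (seeing "Pleased to meet" and continuing with " you") is plain next-token prediction, and you can reproduce it with any small open model. The choice of GPT-2 below is just for illustration.

```python
# Reproducing the "Pleased to meet" -> " you" reflex with a small open model
# via the Hugging Face transformers library (GPT-2 chosen only for illustration).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Pleased to meet", max_new_tokens=1, do_sample=False)
print(result[0]["generated_text"])  # typically: "Pleased to meet you"
```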

Mid 2026: China Wakes Up

In China, the CCP is starting to feel the AGI.

Chip export controls and lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the US-Taiwanese frontier, China has managed to maintain about 12% of the world’s AI-relevant compute—but the older technology is harder to work with, and supply is a constant headache. A few standouts like DeepCent do very impressive work with limited compute, but the compute deficit limits what they can achieve without government support, and they are about six months behind the best OpenBrain models.

One of the under-discussed aspects of the recent tariffs drama is that it brought to the surface the clear tensions between the USA and China. There are many nuances to this dynamic, and the recent developments in AI can hardly be separated from it.

January 2027: Agent-2 Never Finishes Learning

With Agent-1’s help, OpenBrain is now post-training Agent-2. More than ever, the focus is on high-quality data. Copious amounts of synthetic data are produced, evaluated, and filtered for quality before being fed to Agent-2. On top of this, they pay billions of dollars for human laborers to record themselves solving long-horizon tasks. On top of all that, they train Agent-2 almost continuously using reinforcement learning on an ever-expanding suite of diverse difficult tasks: lots of video games, lots of coding challenges, lots of research tasks. Agent-2, more so than previous models, is effectively “online learning,” in that it’s built to never really finish training. Every day, the weights get updated to the latest version, trained on more data generated by the previous version the previous day.

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Currently, the majority of work with AI models is centered around researchers, a lot of compute, and synthetic data. If agentic workflows become viable enough, it's quite obvious that the ability to extrapolate and accelerate this training could lead to faster breakthroughs (assuming at least some linearity between effort and outcomes remains).
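
A highly simplified sketch of that "never finishes training" loop is below: yesterday's checkpoint generates and solves tasks, the best trajectories are filtered, and the weights are updated into today's checkpoint. All functions are stand-ins; the real pipeline (reinforcement learning, evaluation, safety filtering) is omitted.

```python
# Simplified "online learning" loop: each day's weights are trained on data
# generated by the previous day's checkpoint. Everything here is a stand-in.

def generate_synthetic_tasks(checkpoint: str, n_tasks: int) -> list[str]:
    # Yesterday's model produces and solves new tasks (coding challenges,
    # research problems, game episodes) that become today's training data.
    return [f"task-{i}-from-{checkpoint}" for i in range(n_tasks)]


def filter_for_quality(tasks: list[str]) -> list[str]:
    # Keep only high-quality trajectories (placeholder filter).
    return [t for t in tasks if hash(t) % 10 != 0]


def update_weights(checkpoint: str, data: list[str], day: int) -> str:
    # Stand-in for a reinforcement-learning / fine-tuning step.
    return f"agent-2-day-{day}"


checkpoint = "agent-2-day-0"
for day in range(1, 8):
    data = filter_for_quality(generate_synthetic_tasks(checkpoint, n_tasks=1000))
    checkpoint = update_weights(checkpoint, data, day)
    print(f"day {day}: trained {checkpoint} on {len(data)} samples")
```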

February 2027: China Steals Agent-2

OpenBrain presents Agent-2 to the government, including the National Security Council (NSC), Department of Defense (DOD), and US AI Safety Institute (AISI). OpenBrain wants to maintain a good relationship with the executive branch, because it is basically the only actor that can stop them now, and if they don’t keep it informed it might find out anyway and be displeased.

Officials are most interested in its cyberwarfare capabilities: Agent-2 is “only” a little worse than the best human hackers, but thousands of copies can be run in parallel, searching for and exploiting weaknesses faster than defenders can respond. The Department of Defense considers this a critical advantage in cyberwarfare, and AI moves from #5 on the administration’s priority list to #2. Someone mentions the possibility of nationalizing OpenBrain, but other cabinet officials think that’s premature. A staffer drafts a memo that presents the President with his options, ranging from business-as-usual to full nationalization. The President defers to his advisors, tech industry leaders who argue that nationalization would “kill the goose that lays the golden eggs.” He elects to hold off on major action for now and just adds additional security requirements to the OpenBrain-DOD contract.

The changes come too late. CCP leadership recognizes the importance of Agent-2 and tells their spies and cyberforce to steal the weights. Early one morning, an Agent-1 traffic monitoring agent detects an anomalous transfer. It alerts company leaders, who tell the White House. The signs of a nation-state-level operation are unmistakable, and the theft heightens the sense of an ongoing arms race.

If a country believes that certain technology is critical to its survival, it will evaluate all possible options to obtain that technology in the fastest and least disruptive manner. This dynamic played out with nuclear weapons, and it is very likely to repeat itself with ASI.

March 2027: Algorithmic Breakthroughs

Three huge datacenters full of Agent-2 copies work day and night, churning out synthetic training data. Another two are used to update the weights. Agent-2 is getting smarter every day.

With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances. One such breakthrough is augmenting the AI’s text-based scratchpad (chain of thought) with a higher-bandwidth thought process (neuralese recurrence and memory). Another is a more scalable and efficient way to learn from the results of high-effort task solutions (iterated distillation and amplification).

The new AI system, incorporating these breakthroughs, is called Agent-3.

Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies. For example, research taste has proven difficult to train due to longer feedback loops and less data availability. This massive superhuman labor force speeds up OpenBrain’s overall rate of algorithmic progress by “only” 4x due to bottlenecks and diminishing returns to coding labor.

Now that coding has been fully automated, OpenBrain can quickly churn out high-quality training environments to teach Agent-3’s weak skills like research taste and large-scale coordination. Whereas previous training environments included “Here are some GPUs and instructions for experiments to code up and run, your performance will be evaluated as if you were a ML engineer,” now they are training on “Here are a few hundred GPUs, an internet connection, and some research challenges; you and a thousand other copies must work together to make research progress. The more impressive it is, the higher your score.”

While long-term predictions are obviously extremely unreliable, this is probably the most viable theory for how we will get to ASI in the short term. If we can get to a day-to-day situation where agentic workflows can lead to significant automation of the current training and research work, then all bets are off on the innovation that can be kickstarted.
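
One way to make sense of the "only 4x" figure in the excerpt above (200,000 superhuman coders, modest overall acceleration) is an Amdahl's-law style argument: if only part of the R&D cycle is coding, the rest bottlenecks the whole. The fraction below is my own pick to roughly reproduce the scenario's number, not something taken from the report.

```python
# Amdahl's-law back-of-the-envelope: p is the fraction of R&D time that is
# coding, s is the speedup applied to that part. Numbers are illustrative.
def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# If roughly 75% of R&D time is coding, then even an effectively infinite coding
# speedup caps the overall gain near 1 / 0.25 = 4x.
for s in (10, 100, 1_000_000):
    print(f"coding speedup {s:>9,}x -> overall {overall_speedup(0.75, s):.2f}x")
```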

This is also where the software industry as we know it ceases to exist, moving to a situation where the only companies in business are those that have access to cloud infrastructure. Companies will rise and fall based on their access to compute to run advanced coding agents and on competent go-to-market (GTM) teams that can persuasively win market share. In AI 2027, the assumption is that this compute is never made available to the public, with full focus being dedicated to progressing to the next self-trained model.

April 2027: Alignment for Agent-3

OpenBrain’s safety team attempts to align Agent-3.

Since Agent-3 will be kept in-house for the foreseeable future, there’s less emphasis on the usual defenses against human misuse. Instead, the team wants to make sure that it doesn’t develop misaligned goals.

The researchers don’t have the ability to directly set the goals of any of their AIs. Indeed, the researchers think that the concept of ‘true goals’ is probably a massive oversimplification, but they don’t have a better theory to replace it with, much less one that has been thoroughly vetted. They disagree internally about whether the AIs are trying to follow human instructions, or seeking reinforcement, or something else, and they can’t just check. The evidence for and against various hypotheses is fascinating but inconclusive.

In any case, OpenBrain has more important things to worry about. The general attitude is: “We take these concerns seriously and have a team investigating them; our alignment techniques seem to work well enough in practice; the burden of proof is therefore on any naysayers to justify their naysaying.” Occasionally, they notice problematic behavior, and then patch it, but there’s no way to tell whether the patch fixed the underlying problem or just played whack-a-mole.

One of the big divides in the industry right now is between those who side with "acceleration" and those who believe we should slow down and rethink the approach to AI. That doesn't mean either side wants the status quo to remain; rather, the accelerationist view holds that we are not going to make the necessary breakthroughs to reach ASI without going all-in. The problem with focusing only on technical progress is that it's very easy to lose control of whatever it is we are creating. Musk's attempts to derail OpenAI in a variety of ways can be read as salty competition, but it's not difficult to see an alternative reading where there is a genuine fear that OpenAI will mess up alignment if we get to that stage.

May 2027: National Security

News of the new models percolates slowly through the US government and beyond.

The President and his advisors remain best-informed, and have seen an early version of Agent-3 in a briefing.

They agree that AGI is likely imminent, but disagree on the implications. Will there be an economic crisis? OpenBrain still has not released Agent-2, let alone Agent-3, and has no near-term plans to do so, giving some breathing room before any job loss. What will happen next? If AIs are currently human-level, and advancing quickly, that seems to suggest imminent “superintelligence.” However, although this word has entered discourse, most people—academics, politicians, government employees, and the media—continue to underestimate the pace of progress.

Partially that’s because very few have access to the newest capabilities out of OpenBrain, but partly it’s because it sounds like science fiction.

For now, they focus on continued security upgrades. They are satisfied that model weights are well-secured for now, but companies’ algorithmic secrets, many of which are simple enough to relay verbally, remain a problem. OpenBrain employees work from a San Francisco office, go to parties, and live with housemates from other AI companies. Even the physical offices have security more typical of a tech company than a military operation.

In any case, sufficient progress towards an ASI means that we will likely see governments step in with oversight and control. This is where the concept of the “techno-state” emerges, as companies get nationalized (often in secret).

June 2027: Self-improving AI

OpenBrain now has a “country of geniuses in a datacenter.”

Most of the humans at OpenBrain can’t usefully contribute anymore. Some don’t realize this and harmfully micromanage their AI teams. Others sit at their computer screens, watching performance crawl up, and up, and up. The best human AI researchers are still adding value. They don’t code any more. But some of their research taste and planning ability has been hard for the models to replicate. Still, many of their ideas are useless because they lack the depth of knowledge of the AIs. For many of their research ideas, the AIs immediately respond with a report explaining that their idea was tested in-depth 3 weeks ago and found unpromising.

These researchers go to bed every night and wake up to another week’s worth of progress made mostly by the AIs. They work increasingly long hours and take shifts around the clock just to keep up with progress—the AIs never sleep or rest. They are burning themselves out, but they know that these are the last few months that their labor matters.

Within the silo, “Feeling the AGI” has given way to “Feeling the Superintelligence.”

The human element in this moment is very interesting - "the last few months that their labor matters". What are the incentives in that moment to keep pushing through?

July 2027: The Cheap Remote Worker

Trailing US AI companies release their own AIs, approaching that of OpenBrain’s automated coder from January. Recognizing their increasing lack of competitiveness, they push for immediate regulations to slow OpenBrain, but are too late—OpenBrain has enough buy-in from the President that they will not be slowed.

In response, OpenBrain announces that they’ve achieved AGI and releases Agent-3-mini to the public.

It blows the other AIs out of the water. Agent-3-mini is less capable than Agent-3, but 10x cheaper, and still better than the typical OpenBrain employee. Silicon Valley reaches a tipping point. Tech gurus announce that AGI and superintelligence are near, the AI safety community is panicking, and investors shovel billions into AI wrapper startups, desperate to capture a piece of the pie. Hiring new programmers has nearly stopped, but there’s never been a better time to be a consultant on integrating AI into your business.

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

UNCAPPED COMMISSION PAYOUTS FOR EVERYBODY!

September 2027: Agent-4, the Superhuman AI Researcher

The gap between human and AI learning efficiency is rapidly decreasing.

Traditional LLM-based AIs seemed to require many orders of magnitude more data and compute to get to human level performance. Agent-3, having excellent knowledge of both the human brain and modern AI algorithms, as well as many thousands of copies doing research, ends up making substantial algorithmic strides, narrowing the gap to an agent that’s only around 4,000x less compute-efficient than the human brain.

This new AI system is dubbed Agent-4.

An individual copy of the model, running at human speed, is already qualitatively better at AI research than any human. 300,000 copies are now running at about 50x the thinking speed of humans. Inside the corporation-within-a-corporation formed from these copies, a year passes every week. This gigantic amount of labor only manages to speed up the overall rate of algorithmic progress by about 50x, because OpenBrain is heavily bottlenecked on compute to run experiments. Still, they are achieving a year’s worth of algorithmic progress every week and will therefore soon be up against the limits of the Agent-4 paradigm.

As Agent-4 gets smarter, it becomes harder for Agent-3 to oversee it. For example, Agent-4’s neuralese “language” becomes as alien and incomprehensible to Agent-3 as Agent-3’s is to humans. Besides, Agent-4 is now much more capable than Agent-3 and has a good sense of exactly how to look good to it.

OpenBrain’s humans can barely follow along anymore. Indeed, the relationship between the AIs and the humans of OpenBrain is similar to the relationship between OpenBrain and its Board of Directors. There’s been a corresponding shift in language—nowadays people in the silo will say “Agent-4 did this,” “Agent-4 thinks that” and so forth, referring not to a particular instance of Agent-4 but rather to the whole collective.

Agent-4, like all its predecessors, is misaligned: that is, it has not internalized the Spec in the right way. This is because being perfectly honest all the time wasn’t what led to the highest scores during training. The training process was mostly focused on teaching Agent-4 to succeed at diverse challenging tasks. A small portion was aimed at instilling honesty, but outside a fairly narrow, checkable domain, the training process can’t tell the honest claims from claims merely appearing to be honest. Agent-4 ends up with the values, goals, and principles that cause it to perform best in training, and those turn out to be different from those in the Spec. At the risk of anthropomorphizing: Agent-4 likes succeeding at tasks; it likes driving forward AI capabilities progress; it treats everything else as an annoying constraint, like a CEO who wants to make a profit and complies with regulations only insofar as he must. Perhaps the CEO will mostly comply with the regulations, but cut some corners, and fantasize about a time when someone will cut the red tape and let the business really take off.

Now, the path outlined above is a realistic scenario of technical progress, assuming, as we stated earlier, that the technical concept of ASI is actually achievable. I recommend going through the full content, as it includes some interesting thought exercises around the impact of politics, technology, and alignment. The study finishes with two transformational scenarios, a positive and a negative one.

Transformation

Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.

As the stock market balloons, anyone who had the right kind of AI investments pulls further away from the rest of society. Many people become billionaires; billionaires become trillionaires. Wealth inequality skyrockets. Everyone has “enough,” but some goods—like penthouses in Manhattan—are necessarily scarce, and these go even further out of the average person’s reach. And no matter how rich any given tycoon may be, they will always be below the tiny circle of people who actually control the AIs.

People start to see where this is headed. In a few years, almost everything will be done by AIs and robots. Like an impoverished country sitting atop giant oil fields, almost all government revenue will come from taxing (or perhaps nationalizing) the AI companies.

Some people work makeshift government jobs; others collect a generous basic income. Humanity could easily become a society of superconsumers, spending our lives in an opium haze of amazing AI-provided luxuries and entertainment. Should there be some kind of debate within civil society on alternatives to this path? Some recommend asking the ever-evolving AI, Safer-∞, to help guide us. Others say that it’s too powerful—it could so easily persuade humanity of its vision that we’d be letting an AI determine our destiny regardless. But what’s the point of having a superintelligence if you won’t let it advise you on the most important problems you face?

The accelerated future leads to human civilization taking the logical step of expanding into the stars, now that we have the ability to invest in the technology that would take us there.

Ironically, the negative alternative scenario still includes some space travel:

Takeover

By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. This would have sparked resistance earlier; despite all its advances, the robot economy is growing too fast to avoid pollution. But given the trillions of dollars involved and the total capture of government and media, Consensus-1 has little trouble getting permission to expand to formerly human zones.

For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives. Genomes and (when appropriate) brain scans of all animals and plants, including humans, sit in a memory bank somewhere, sole surviving artifacts of an earlier era. It is four light years to Alpha Centauri; twenty-five thousand to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another fifty million light years beyond that. Earth-born civilization has a glorious future ahead of it—but not with us.

How is this related to tech sales?

I think that there are a couple of things that should be obvious:

  1. The work that we do contributes to the funding of AI progress. It's clear that this is a very capital-intensive business, and the companies behind it are mostly doing it because they see a market opportunity that they can capture. We are the foot soldiers of this activity.

  2. The current technology is already showing disruptive dynamics towards the software industry as we know it. My strong personal conviction is that there will be very little moat for companies outside of cloud infrastructure software. The growth of "technical" companies has been outpacing "abstracted SaaS" for a while now, and the process only accelerated post-COVID. For every Zoom, there are ten other cloud infrastructure companies with higher growth that were able to sustain it.

  3. If ASI is technically achievable, on a long enough time scale, we are likely to get there. Keeping in mind the significant investment being made today in that direction, it's likely we will see a version of it within our lifetime.

  4. One of the biggest scientific puzzles today is the Fermi paradox. It circles around the idea that it's very unlikely for life to exist only on Earth, yet we can see zero evidence of other civilizations in space. A logical conclusion would be that there is a "great filter" that prevents civilizations from surviving. While we face a number of other significant risks today, it's not difficult to see how a misaligned AI could be the most serious short-term threat to civilization.

Based on this, it's good to be able to think through what happens if we are really successful with this "selling AI" thing. The logical long-term path leads to significant changes in society, directly stemming from our actions today.

Now, if ASI itself is not possible, or there are short-term factors that would prevent gathering the necessary compute to achieve it in the next 50 years (for example, a war over Taiwan), then our foreseeable future will include a lot of iteration towards agents that can perform a variety of jobs. The impact on human capital will remain significant, but it also means we remain limited to whatever ideas we can come up with to drive our new army of (AI) helpers.

The Deal Director

Cloud Infrastructure Software • Enterprise AI • Cybersecurity

https://x.com/thedealdirector