Agentic AI Is Not Your Teammate
Why property managers shouldn’t buy the hype—and what responsible AI use really looks like.
It’s definitely no secret: I do NOT like Vendoroo1. But I’m going to set aside those non-product-related concerns for the purposes of this article and instead focus on their actual product claims: that agentic AI, as it’s currently available, will be transformational for your business. To do this, I’m going to critique a particular marketing email that they sent out earlier this month. While I hate to give them any clicks or traffic, it’s only fair that you get a link to read what I’m critiquing.
Setting aside the “every sentence is its own paragraph” writing style that makes me twitch, I have some very real points of contention with the substance and not just the style presented in this article.
The Seductive Appeal of “Agentic” AI
It’s not hard to see why Vendoroo’s marketer (or “chief evangelist,” as he likes to call himself) has latched on to the allure of “agentic AI,” because its promised uses address key industry pain points (if those promises ever become reality). The idea that AI can become a “teammate” instead of just a tool to be used by you and your actual teammates is seductive. The author even explicitly highlights some of these pain points, talking about how AI doesn’t have normal work hours, doesn’t call in sick, doesn’t take a vacation, etc. For a business owner struggling to manage people, this sounds great, right?! But I’ll refer you to the below image, which comes from a presentation given by IBM management decades ago:
Back in 1979, when this presentation was given, AI of any sort was just a dream for the distant future. Alan Turing had, three decades prior, already thought ahead to the possibility of AI and how we could test it for human-level intelligence once it finally arrived, but there was no real concept then of how any form of AI could actually be developed. Nevertheless, technology was developing at a rapid enough rate that IBM wasn’t the only one thinking about this issue. Computer scientist Joseph Weizenbaum had written an influential book a few years prior called “Computer Power and Human Reason” that argued the same idea: computers will always lack the human qualities that are needed for important decision-making authority.
So I’ll give the “chief evangelist” some credit for being good at marketing. That is, after all, what he is: a marketer. A podcaster. An influencer. You know what he’s not? A property manager. A maintenance coordinator. A computer scientist. Etc. He’s very good at pushing the right buttons in your psyche to make you think that agentic AI is your savior. But like with all marketing, you need to be highly skeptical, because the job of a marketer is to convince you to buy something whether you actually need it or not.
What is Agentic AI, Really?
I don’t quibble much with the definition presented by the marketing email. Agentic AI is different from generative AI (what you’re used to using with ChatGPT to help you write a blog post, for example) because it actually takes actions on its own. While generative AI can help you write up a work order description, agentic AI can actually create the work order for you and dispatch a vendor to handle it.
But where I part ways with their definition is on some of their more outlandish claims, such as “agentic AI isn’t smarter software after all…It’s an actual workforce you hire.” Setting aside the poor prose, this is just factually incorrect. AI is, after all, just software. It doesn’t matter whether it’s agentic, generative, or general AI; it’s all just software. In many cases the developers themselves don’t understand exactly how the software works (ask ChatGPT about the “black box problem” of AI), but it is, nevertheless, just software. It’s code running on a machine: collections of algorithms, vector databases, etc. It doesn’t know how to reason, it has no capacity for human emotion or empathy, and it can’t even solve truly complex problems.

Don’t believe me? Try playing a game of poker against ChatGPT, Claude, or any other system you prefer. It is completely lost and can’t even understand what is happening in the game, much less how to beat even an amateur player. It will even “hallucinate” duplicates of cards in the same deck. To date, the only AI systems that have beaten humans at poker were systems specifically designed for that purpose, not general-use machine learning models. There’s a reason that AI advocates love to mention how easily AI can beat humans at chess: chess is a purely algorithmic pursuit. The number of moves in a given situation is vast, but it is fixed, and it’s all based on mathematical probabilities. Poker is different, because poker is more human. Systems have to be specifically designed to handle things like game theory, to take in information on how their human opponents are behaving in order to detect “tells,” to vary the size of bets in ways that will influence opponent behavior, etc. In short, poker is a very human game, and AI isn’t very human at all.
And that’s where the main problem lies for our profession and how agentic AI can be used. Property management is a very human game. Agentic AI wants to solve a problem by completing tasks, but sometimes the tenant isn’t calling in to solve a problem; they’re calling in just to vent. They already know that you’ve scheduled a vendor to fix the broken air conditioning, but dammit, they want you to listen to them scream and holler about how hot it is for the next ten minutes! A human gets that. A human has empathy and can understand that need and accommodate it. AI, on the other hand, does not. It can try its best to mimic human behavior, but it doesn’t actually understand anything at all.

When a tenant is on the phone crying about how they don’t have hot water and the relatives are about to arrive for Thanksgiving dinner, the AI can’t tell a story about a similar situation that happened to them and how they coped with it, because the AI doesn’t actually exist and has never had to cope with anything at all. At best, the AI can lie and mimic human behavior to the best of its algorithmic ability, telling you what the algorithm thinks you want to hear. But in most cases, the AI isn’t even smart enough to do that. It’s just going to try to solve the problem, and in many cases, that’s only going to make the actual human being on the other end of the phone more frustrated. Especially if they hadn’t figured out that they were talking to AI until they started crying, and then the AI’s inability to handle that situation properly leads it to accidentally reveal that it’s AI. (Yes, that’s right: Vendoroo’s AI and many others don’t tell people up front that they’re AI, and even go to extreme lengths to hide it by doing things like playing fake call center noise in the background.)
So, is agentic AI really a “teammate”? No, of course not. Teammates are human beings with their own emotions, thoughts, life experiences, etc. AI is software. It’s code. It accomplishes a purpose. Please, let’s stop trying to personify some lines of computer code. Let’s wait until we have something even resembling artificial general intelligence before we start ascribing human characteristics to it.
The Legal Issues
But let’s set aside all of this human stuff like empathy and experience, and focus on another core issue: the legality of it all. Most states in this country have very strict requirements for licensure of property managers. Some states have a separate property management license, and some just use a real estate license, but most states require one of the two. And in most states, the delineation between what requires a license and what doesn’t is based on decision-making and judgment calls. Essentially, if a task requires any sort of professional judgment (what a contract means, what a property is worth, whether a tenant is responsible for a repair bill, etc.), a license is required to make those decisions and recommendations.
All of these AI “pioneers” are basically just ignoring this and going full-speed ahead on the assumption that regulators are going to be perfectly okay with software stepping into this role. I think that’s a pretty arrogant presumption. My best guess is that most state regulators will just sit back and allow PMs to use AI to their heart’s content, but the second a consumer complaint is filed about something the AI did, the hammer is going to come down on the PM, and the real estate commission is going to look askance at the fact that you even considered allowing AI to do these tasks and make these decisions. The cases will eventually pile up, and before we know it, there will be new regulations on the books specifically telling you that agentic AI is straight up prohibited in many functions. Maybe you want to be the test case for this, but I certainly don’t.
One state has even gotten ahead of this, surprisingly, and released their own bulletin just this month. The state of North Carolina has basically thrown down the gauntlet and told real estate agents and property managers that YOU are ultimately responsible for anything that your AI does, and they have an expectation that you are personally reviewing any output from AI before it goes out to a consumer. Not only that, but they demand transparency. “Clients and customers have a right to know how decisions are being made…” In other words, hiding that your AI is AI isn’t kosher. If you’re using agentic AI, you need to tell consumers that you are up front. This bulletin is just from the North Carolina Real Estate Commission, but you can bet your ass that every other state’s equivalent regulatory body is going to see it the same way.
Vendoroo tries to assuage these concerns by promising you that if something the AI does costs you money, they’ll reimburse it. I’d like to see the fine print on this guarantee, if it’s even in the contract at all, but suffice it to say that it’s ultimately an empty promise, because a lot of this stuff can’t even be quantified. What is the monetary loss that you’ve experienced if a tenant calls in, gets pissed off by the AI, and then leaves a scathing Google review? I’m betting “the Roo’s” lawyers will argue nothing. Good luck winning that case. But forget it even getting that far. What is the loss of goodwill from your tenant and owner base when they feel that the entirety of the personal touch is gone because you’ve used too much AI? Does your churn increase? Does your lease renewal rate drop? Are you even going to be able to figure out whether these changes were attributable to AI?
It is often said that property management is a relationship business, as opposed to real estate sales, which is more of a transactional business. I would worry much less about AI in a transactional business (though I’d still worry). But this is a relationship business. You work with the same owners and tenants for years at a time. I have consulting clients who have worked with some of the same owner clients for four decades. You think anyone is going to be loyal to AI code for decades? Of course not. You’re just further commoditizing yourself.
What We Need Instead
I’m not here to just be a hater (although I am that)2. I’m here to provide some guidance on an alternative way of going about things. I don’t reject AI, I embrace it. But I embrace it as a way to assist humans, not to replace them. I’m not interested in an “AI teammate.” I like my human teammates just fine, thank you. But I would certainly like to make their jobs a lot easier.
These are some guidelines I think we should all follow for how we use AI in this industry (and most others):
ALWAYS disclose AI up-front; under no circumstances should you take efforts to conceal that a consumer is interacting with AI instead of a person, and you shouldn’t use AI vendors who do
Limit AI automated responses to very basic things, such as sending people an application link for a property or giving them basic factual information about an available home; everything else should always require a human to review it before it goes out
As IBM said half a century ago, never let AI make key decisions; if it requires a decision to be made, the AI should go no further than making a recommendation to a human who then makes the final decision
Thoroughly test your AI before putting it into practice; Tiffany Rosenbaum has implemented Kindred PM (an AI vendor I happen to like a lot), but she tested it for MONTHS before letting it loose to handle calls on its own, and it still escalates issues to a human when in doubt; when I tested it and asked it a question that involved fair housing issues, it bowed out and escalated to a person, as it should have
Never hand over licensed activities to AI; if your state says that something requires a license, leave it in the hands of a human
Audit your AI regularly; this isn’t just checking for obvious issues, like answering questions incorrectly; you also need to be checking for things like disparate impact: is the AI doing different things for different groups of people?
Do not feed any sensitive personal information into AI platforms
Have an AI policy for your company that defines how and when you allow AI to be used
Have a backup plan; if AI goes down (and it will), you need to be able to step in and handle the things that the AI is usually doing; this is one of the many reasons that you can’t just replace people with AI
Ultimately, I hope our industry arrives at the conclusion that AI is an assistant for human beings rather than a replacement for them. I’m pretty confident that this will happen, because this is still a hyper-local business. Our industry is composed of literally thousands of mom-and-pop PM companies across the country, not giant players. And the reason that’s the case is that this is a relationship business: people don’t have relationships with giant faceless corporations, and they sure as hell don’t want to have relationships with mindless AI. There will be industry players who attempt to replace entire teams of people with AI, just as we had companies like Castle, which thought it could automate most property management tasks a decade ago. It didn’t work for them (the business ultimately failed), and I don’t think it will work for anyone today who is trying to supplant humans with AI. But they’ll try. I would encourage you not to follow their lead. Let’s remain a human-centric business, with AI making our jobs easier and more productive. That’s the path to success.
Open to Work
Are you an experienced PM industry employee looking for work? Or are you a PM company or vendor seeking the best talent? Send me your info and I’ll feature it here! And look forward to future editions where we’ll be featuring some of the best RTMs available!
Newsletter Stats
Here are our statistics for the last 30 days:
45,932 impressions
46.76% open rate
Issue with the highest readership:
7,827 impressions
Seeking Advertisers
We still have plenty of open spots for advertisers this year, so if you’re an industry vendor looking to get the word out to our large audience, please visit our advertiser sign-up page here. All advertisers are welcome. Unlike the PMAssist Partner program, advertising is open to all vendors, not just vendors we use at our own property management company. Advertising tends to sell out about a month in advance, so please plan ahead if you want specific dates for your ads.
Debate Me
Disagree with my take here? Have a different perspective? There’s nothing I love more than a good debate or even just an intelligent conversation. If you’d like to jump on a podcast recording with me to discuss this topic, please let me know!
As a reminder, it has now been 510 days since Vendoroo senior leadership and employees were credibly accused of drunkenness and sexual misconduct at an industry conference, and they have taken no public accountability whatsoever, despite my calling on them to do so for months. I was not there to witness what they have been accused of, but they have not issued any denials of what numerous witnesses have claimed occurred.
In the interest of full disclosure, I want to make clear that I have been accused by Vendoroo of having a conflict of interest. After I called out this vendor for their poor behavior previously, they sent out a secret letter to a handful of industry insiders accusing me of attacking them only because I have a (very) small ownership stake in EZRepair Hotline. They apparently view themselves as a competitor to EZ (I don’t, as EZ follows the model I recommend here of humans assisted by AI). You can decide for yourself whether my small ownership stake in EZ is motivating for me, or whether I simply found their behavior abhorrent. Those who know me certainly won’t have any doubts about my sincerity. I just wanted to make sure you knew about that up front so you can make up your own mind.