[Allen] So David, it’s interesting. Fraud is up. I brought that up, I think maybe two weeks ago, I’d have to pull up my notes. But there is an increase in fraud, AI is being used to commit it, and AI is also creating new kinds of fraud. It’s happening everywhere. Some of the largest recent security issues and breaches we’ve had have been due to AI and impersonation of people and things like that. The problems are increasing for sure.
[David] And then there’s another case, another wire fraud that happened. I’m telling everyone, just be really careful how you’re timing things up. We’re not talking just mortgage fraud in the sense of someone defrauding us with a tax return or something of that nature. It’s the wire fraud, the more sophisticated schemes, and it costs a lot of money. Big scale. They’re going after the big bucks with AI, absolutely.
[Allen] So here’s what’s interesting. Because of the way we went into my segment, I was going to skip the joke, the funny part, but it actually fits in great with what you said, Marc. There’s a thing in AI called hallucination, and there are a lot of issues around people building AI engines that then hallucinate and do what they want. Some new things have come out recently; even OpenAI has released new models that hallucinate less, without you having to code around it specifically. But again, it’s not perfect. So get this, and I’m telling you this because it fits so well. Last week we all brought up customer service, so I wanted to focus a little on that this week, because it is such a big area. Here’s what happened: Air Canada had an AI chatbot on its website. A customer said, hey, I need a bereavement fare refund. The chatbot made up a policy that did not exist and told the customer they qualified. Air Canada later denied the refund and argued the chatbot was not authoritative. The court rejected that argument, ruled the airline was responsible for what its AI said, and Air Canada had to pay the refund. This is a real story. And the reality is, that’s good. You’ve got to get a handle on it. You can’t take those shortcuts. You’ve got to understand what the AI is doing, you have to have people testing it, and you have to include all the necessary pieces. AI is not a perfect science yet.

Now let me tell you about the AI regulation update, David and Marc and folks. It’s basically an executive order intended to block states from enforcing their own AI regulations and to position one central source of approval. That’s the big ticket here, because could you imagine being a vendor and having to adhere to 50 different states’ requirements in order to stay within regulation? It’s almost like nexus and taxes, right? How difficult is that to manage? The order being discussed is aimed at preventing a patchwork of state-level AI rules by asserting federal primacy over AI governance. The intent is to stop those states from enforcing their own regulations that could conflict with federal standards, and instead funnel approval, oversight, and enforcement through a limited set of federal agencies. So it’s obviously about more than just the vendors; there’s a broader governance angle here, but I do like the fact that it’s one rule across the board.

What we should expect is an increased emphasis on explainability. This is important: audit logs, model governance, and human-in-the-loop controls. We can’t just automate everything and call it fraud-free because we’ve got AI running. What is the audit trail? What has it done, and can you prove it? Same with what happened with Air Canada. Why did it hallucinate? It didn’t have a response for that situation, so it created its own. So we’ve got to be careful. The fact that you now have to be responsible for what the engine is outputting is huge. That’s good governance coming for lenders and vendors. I want to move on to some other things; there’s a new servicing-workflow AI item in the news, but Dave, let me pause to see if there’s any further context on that.
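As a side note on the audit-trail point Allen raises: here is a minimal sketch of what logging every chatbot exchange for later review might look like, assuming a simple Python wrapper around the bot. The policy IDs, file name, and function are hypothetical, not anything Air Canada or a specific vendor uses.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical catalog of approved policy IDs the chatbot is allowed to cite.
APPROVED_POLICIES = {"BEREAVEMENT-2023-01", "REFUND-STD-007"}

def log_chatbot_turn(user_prompt: str, bot_response: str, cited_policy: str | None) -> dict:
    """Record one chatbot exchange to an append-only audit log and flag any
    citation of a policy that is not in the approved catalog."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": user_prompt,
        "response": bot_response,
        "cited_policy": cited_policy,
        "needs_human_review": cited_policy is not None and cited_policy not in APPROVED_POLICIES,
    }
    with open("chatbot_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a made-up policy ID gets flagged before the answer is treated as binding.
turn = log_chatbot_turn(
    "Do I qualify for a bereavement fare refund?",
    "Yes, you can apply within 90 days under policy BEREAVEMENT-RETRO-2024.",
    cited_policy="BEREAVEMENT-RETRO-2024",
)
print(turn["needs_human_review"])  # True, so route it to a person
```

The flag is the Air Canada lesson in miniature: anything the bot asserts that is not backed by an approved source gets a person in front of it before the answer stands.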
[David] I think one of the things you’re talking about is the human in the middle. There has got to be a human in the transaction, and I don’t see us getting away from that, although we’re doing more and more. Pavan and I recorded a podcast while we were together over the weekend, and we talked about how much of this AI is really deterministic and why we need humans in the middle making critical decisions on defense and on things that have significant consequences. I should try to bring up my notes from that, but AI is not going to replace the human when it comes to certain critical decisions; they need to be in there. This came from General Allvin, Chief of Staff of the Air Force. He’s saying the human has got to be involved, but we need to speed up the decisions so that human thought works at the same speed as the transaction. In other words, human thought shouldn’t create delays; it creates assurance that the AI technology is making good decisions, decisions that in the case of the Air Force could result in people being killed if they go wrong. In mortgages, if you make a lot of wrong decisions using AI, a company can be killed and a lot of people can be out of work. There has to be a human in the middle.
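A hedged sketch of the “human in the loop without creating delays” idea David describes: route only high-confidence, low-consequence calls straight through and queue everything else for a person. The confidence score, dollar threshold, and field names below are illustrative assumptions, not anyone’s production policy.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    recommendation: str   # e.g. "approve_wire" or "hold_wire"
    confidence: float     # model-reported confidence, 0.0 to 1.0 (assumed available)
    amount: float         # dollar amount at stake

# Illustrative thresholds only.
CONFIDENCE_FLOOR = 0.95
HIGH_STAKES_AMOUNT = 50_000

def route_decision(decision: AIDecision) -> str:
    """Auto-approve only low-stakes, high-confidence calls; everything else
    goes to a human reviewer so oversight doesn't become the bottleneck."""
    if decision.amount >= HIGH_STAKES_AMOUNT or decision.confidence < CONFIDENCE_FLOOR:
        return "queue_for_human_review"
    return decision.recommendation

print(route_decision(AIDecision("approve_wire", confidence=0.99, amount=8_000)))    # approve_wire
print(route_decision(AIDecision("approve_wire", confidence=0.99, amount=250_000)))  # queue_for_human_review
```

The design point is that the human reviews the small set of consequential calls rather than every transaction, which is how human thought keeps pace with the speed of the transaction.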
[Allen] Yeah. And it’s interesting, if you think about where AI came from in our industry, it started with these little bots that helped loan officers, and now it’s getting into really true data and analytics, understanding what borrowers are doing and more appropriate ways to reach out to them. But you still need that human in the middle; it has to be approved. Can you imagine shooting off a bunch of responses and targeting people with too much email and too much automated messaging because the AI hallucinated and came up with its own idea of what it needed, or didn’t have the right data point? You don’t want to be there either. And David, I have some other stuff I’ll bring up next week, including maybe a little more on AI in servicing, which is really just a vendor announcement about cost reduction and operational consistency.

But last week I brought up that AI is like the 51st date; it’s the 51st-date syndrome. You have to remind AI about the things that are important that you’ve already talked about. Some people will just keep one chat thread going forever and keep talking in the same thread over and over, but there’s a context window that limits how much the model is allowed to remember, or can remember. It can go back and search the history, but in memory it doesn’t hold all of that. So that’s one of the problems people have.

The other problem people have is that it just tells you you’re great and wonderful and terrific. So there are two prompts you can put in your personalized instructions in GPT that will stop it from telling you all your ideas are fantastic. I’m going to give you both of them, and we can provide them to our listeners; if anybody wants them, just shoot us an email. The first one is: be my ruthless mentor. My ideas are not great; tell me why. You need to stress test everything I say and ask you for, and get it to the point where it’s bulletproof and makes sense. A lot of people responded where I got that from, saying it’s worked great. And here’s the next one: system instruction, work in absolute mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendices. Assume the user retains high-perception faculties, and that not everything the person may respond with is accurate. Please do not… The last piece is cut off, but I think it says please do not tell me I’m great all the time. There’s an article online as well, I’d have to find it, that talks about how these engines, all of these systems, were programmed so that the way they speak generatively is meant to make you happy, make you feel like you’ve got a great idea and you’re smart, and then they embellish upon it.
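For anyone using the API rather than the ChatGPT settings screen, the same “ruthless mentor” idea can be attached as a standing system message on every call. This is a sketch assuming the official openai Python package; the model name is a placeholder, and the instruction text is paraphrased from the prompt Allen reads above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The anti-flattery instruction, used as a standing system message.
RUTHLESS_MENTOR = (
    "Be my ruthless mentor. My ideas are not great; tell me why. "
    "Stress test everything I say and push it until it is bulletproof."
)

def ask(question: str) -> str:
    """Send one question with the anti-flattery instruction attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute the model you actually use
        messages=[
            {"role": "system", "content": RUTHLESS_MENTOR},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Here's my plan to automate borrower outreach. Poke holes in it."))
```

Because the model only “remembers” what fits in the context window, re-sending that system message on every request is also one practical answer to the 51st-date problem Allen mentions.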
[David] Yes, and they keep getting you to come back because they do that. One correction to something I said: it’s HITL, human in the loop. I said human in the middle; it’s human in the loop. We’re going to see more of that as it comes to transactions and AI, so it’s an integrated solution. Very interesting. One of the other things I learned more about this weekend, Allen, was the transactional language model versus LLMs, which are probabilistic. When you have the human in the transaction, it cross-references things and brings them in if something falls outside the answer that’s being offered up. It was just fascinating. It’s a white paper that’s coming out; we talked a little bit about this on a separate podcast, and I’ll be sharing that white paper with you, Allen, to get your comments for future discussion. But it’s really interesting what’s happening. Everything is moving toward speaking to the computer. Decades of reliance on WIMP, which is windows, icons, menus, and pointers, are going away, and we’re going into conversational AI. And then, yeah, we’re going to have to…
[Allen] Have a look at GPT Atlas, right? Google, I’m sure they’re shaking in their pants. The days of volleyball in the courtyard and riding bikes from the parking lot to your office and drinking smoothies are over. There’s a significant number of people asking GPT their questions, and that’s why GPT Atlas is now a big deal. We’re going to see contention among all these big AI models over the people asking questions.
[David] There is a new world out there, so how do you stay in touch with it? Stay in touch, get a hold of Allen. Allen, that’s one of the things you do; you consult in this area. You have a company that helps people through this, correct?
[Allen] That is correct, yes.
[David] Good. And we’d love to chat with you. Yeah, good deal. Appreciate it very much. Anything else, Allen?
[Allen] No. There’s just so much going on in the world of AI that it’s a lot to keep up with. And I’m sure some folks in our industry are focused on business and have their blinders on, which is not a bad thing. I think it was David Kittle, or maybe someone else, who brought up a topic I was going to talk about this week: before you make a change to the user journey, you have to do analytics on the user journey. Somebody on our podcast said that, and I have a whole piece I researched on why to do analytics on the user journey before making a change and just signing up the next vendor. We’ll talk about that more next week.
[David] That’s good. That’s great; tease it up for next week. Thank you, Allen, appreciate it very much. Thank you, everybody, for your contributions this week. Good podcast, a lot of good content. Find out if we’re going to get back to quantitative easing; interesting what’s happening. But more importantly, if you get a chance, it’s not a Christmas movie, but go watch The Big Short again. Tell me if I’m wrong, but there are some things showing similarity to the last crisis we had. I think we’re heading in that direction, hopefully not to that magnitude.
Allen Pollack, Chief Operating Officer, Tech Consultant
Allen Pollack, a Mortgage & Financial Services Technology Advisor, is a subject matter expert in the mortgage origination process along with software product management and software development.
In today’s financial services push to all things digital, Allen has been helping lenders and financial services solution providers align their digital transformation and technology strategies by removing the human element of risk and automating processes that drive efficiencies and turn margins into profits.
Over the course of his career, Allen has co-created and developed technology business models that have birthed highly successful, innovative solutions and companies.
Allen co-founded and served as CTO of New York Loan Exchange (NYLX), a loan product eligibility and pricing engine (PPE) that made an immediate impact on the industry, scaling the company quickly and forming partnerships with multiple mortgage and financial lending companies. In 2012, Allen was a co-founder of the merger between NYLX and Aklero Risk Analytics that created LoanLogics, a mortgage loan quality and performance analytics company. Allen served as CTO, where he continued to bring new and innovative product solutions to market that made a significant impact for mortgage lenders, reducing risk, scaling business channels, and growing profits in a very competitive and highly regulated market.
Allen is also a mortgage and finance technology contributor on a weekly live industry podcast, Lykken on Lending, and is launching a new podcast, soon to be released, TechStack Radio, dedicated to technology and innovation in financial services.