Risky Science Podcast

A conversation with Daniel Schwarcz, professor at the University of Minnesota Law School, where he teaches insurance law, contract law, tort law, and financial regulation. His academic work sits at the intersection of AI governance and insurance regulation.

  • (00:00) - Introduction
  • (00:17) - Guest background: From P&C attorney to insurance law professor
  • (02:13) - AI in insurance today: back-office efficiency vs. underwriting and claims
  • (10:06) - Is AI "locked and loaded" for underwriters and claims departments?
  • (12:24) - The 50-state regulatory problem and its compounding complexity
  • (22:05) - Catastrophe modeling and AI in property underwriting
  • (30:19) - Why disclosure usually forestalls regulation rather than protecting consumers
  • (38:40) - Schwarcz's proposed fix for shadow insurance
  • (43:40) - "Obamacare for Homeowners Insurance": the case for insurance exchanges
  • (48:56) - Five-year outlook: where is the insurance industry headed?

Creators and Guests

Host
Christopher Westfall
Editor and Owner, Risk Market News
Guest
Daniel Schwarcz
Fredrikson & Byron Professor of Law at the University of Minnesota

What is Risky Science Podcast?

The Risky Science Podcast features conversations with scientists, insurers, investors, portfolio managers, and others about the evolving science of predicting and modeling risk across both natural and man-made perils.

Christopher Westfall
00:17
Professor Schwarcz, thanks so much for joining me today. I think you have a lot of information and a lot of thoughts about many of the subjects that the podcast and Risk Market News cover, especially around insurance and AI. But I always like to start with a little bit about yourself and what you're working on right now.

Daniel Schwarcz
00:46
Sure. So I'm a professor at the University of Minnesota Law School. I've been working in the insurance space for about two decades now in a variety of capacities. For a while I was actually a practicing lawyer working for insurers. I've been an expert witness on a pretty broad range of issues. I teach insurance law and a range of related topics — contract law, tort law, financial regulation, and health insurance. And I also have a longstanding interest in AI and the ways in which it is and could shape both the insurance industry and the laws and regulations surrounding it.

Christopher Westfall
01:27
Just for clarification — what part of the insurance industry did you work in? Was it health, P&C, or something else?

Daniel Schwarcz
01:38
When I was actually practicing law — which was not for a terribly long time — most of my work was in the P&C space. I would say that most of my experience is centered there, but I've done a decent amount on health insurance as well, and less on life. Even in the life space, though, I've done a fair bit on the financial regulatory side, the solvency side. I like to say that anything in the insurance industry is something I'm potentially interested in — pretty broad-ranging interests.

Christopher Westfall
02:13
Perfect — the perfect person to speak to. I wanted to start with something I come across a lot in my conversations and in writing about the industry and how it interacts with the rest of the markets and models. I came across some of your recent work around AI, and if I understand it correctly, a lot of that work talks about productivity increases in legal and insurance settings. But there are still risks — hallucinations, persuasion problems, and the black box criticism that's common around models. When I listen to earnings calls and read transcripts from insurers, they tend to talk about AI in terms of customer service and back-office efficiencies. So my question is: have you thought about what happens when AI starts interacting directly with underwriting and claims — when pricing and coverage decisions are directly affected by AI?

Daniel Schwarcz
03:44
Yes. As you alluded to, a lot of my recent work has been looking specifically at how AI is currently and potentially impacting the practice of law — though that obviously intersects with the insurance industry generally and particularly with the P&C sector. Once we start talking about AI in claims handling and pricing, these are actually some of the issues that initially got me interested in AI more than a decade ago.
I think there's still a tremendous amount of uncertainty in this space, and much of it stems from regulatory and legal uncertainty. Insurers still face a lot of open questions about when and how they can use AI for underwriting, claims handling, and fraud detection. That's creating real caution — even as the potential is clearly enormous. The value proposition on the underwriting and rating side has been obvious for probably a decade. Machine learning techniques are simply much better at making predictions than traditional human approaches. But there are significant regulatory concerns that have limited adoption, and we're still trying to work through them.
On the claims handling side, health insurers have been at the leading edge of using AI and algorithmic tools to handle claims — and it's creating major legal and regulatory challenges for them. So while there are certainly insurers using these techniques in certain settings, there's still a lot of caution, with guardrails like keeping a human in the loop or limiting AI to affirmatively approving coverage but not denying it. From what I've seen, companies are still not sure exactly how much they can automate or how aggressively they can do so. But increasingly, the value proposition is becoming more obvious, and more insurers are starting to experiment and push for greater regulatory and legal clarity.

Christopher Westfall
06:42
Do you think the AI systems right now have the capacity to make those kinds of judgments? It sounds like insurers are waiting for a signal from regulators or market watchers before moving ahead. Are they confident in the technology itself?

Daniel Schwarcz
07:03
Well, AI is such a massive umbrella, and there's also a massive set of functions we could talk about.

Christopher Westfall
07:12
Right.

Daniel Schwarcz
07:12
So it's hard to answer at that level of generality, except to say: yes. There are many, many domains in insurance in which many types of AI tools have the technical capacity to make things faster and better. They also create risks — and that's the real problem.
Take fraud detection, for example, which I understand many insurers are already engaging with — using a human in the loop to flag suspicious claims. It's been clear for some time that predictive AI and machine learning techniques — not even generative or agentic AI — can be very effective at flagging potential fraud when you have solid historical data. Are there risks? Absolutely — they can reproduce bias from training data, and those risks need to be managed. But the capability has been there for a while.
And when it comes to the latest generative AI and large language models, it's absolutely clear that they can perform tasks like certain claims determinations and drafting correspondence. But again, there's significant regulatory risk, bad faith exposure, and reputational risk. So I think we're at a point where it's clear that many functions historically performed by humans or simple algorithms can be substantially impacted by AI. The real question right now is whether the risks are worth the reward — and different players across different industry segments are answering that question very differently. Some firms are leaning hard into AI across a broad swath of their operations. But many of the larger players are taking a more cautious approach.

Christopher Westfall
10:06
There have been a lot of articles recently about AI's effect on different industry sectors — every week there seems to be something about AI wiping out real estate brokers or insurance brokers. Is your argument that AI is essentially locked and loaded for underwriters and claims departments, and it's just waiting for insurance executives to get sufficient regulatory confidence to pull the trigger?

Daniel Schwarcz
10:38
These are very difficult questions and I don't have a definitive answer. They go far beyond insurance — they apply across the economy. And beyond the legal and regulatory dimensions, there are also questions about whether customers will accept non-human AI interactions, and there's just general organizational inertia.
It's increasingly true that cutting-edge AI tools can do many of these jobs, and frankly, can do many of them with very limited human involvement. The same is true in law — for certain types of lawyers, AI capabilities are clearly advancing, but we're not seeing mass changes in the legal profession. This gap between AI's capabilities and actual diffusion is persistent everywhere. I don't think insurance is unique in this regard, though it may be more of a laggard because of the extent of its regulation and the complexity of the legal questions involved.
Honestly, my gut feeling is that we'll see this technology diffuse pretty broadly in the next couple of years — not just in insurance, but across a wide swath of professions. But that's an educated guess based on what I've observed and what I understand the technology to be capable of.

Christopher Westfall
12:24
Is the regulatory aspect made even more complicated by the fact that it's a state-based system? You'd potentially need fifty different approvals. Does that play into it?

Daniel Schwarcz
12:37
It's absolutely an issue. It's not just the regulatory complexity, but also the legal complexity — bad faith laws, ERISA, depending on what type of insurance we're dealing with. And I want to be clear: I'm not saying these barriers simply need to be knocked down. In many cases they exist for good reasons.
The context where this has played out most publicly is health insurance, where we have major lawsuits against large insurers that have used AI and more traditional algorithms in their claims handling — quite publicly. There are real questions raised by that: whether there are biases in the training data that get reproduced, whether the AI is calibrated to ignore individual circumstances when insurers may actually have legal obligations to consider them, and whether the doctors or claims handlers approving hundreds or thousands of claims are actually meeting their contractual and regulatory obligations.
On the underwriting and rating side, there are also real concerns — something I've written about and thought about for some time. Why is the data predictive? Is it predictive because it's proxying for things like race or income — factors that are legally and regulatorily off-limits for very sensible reasons?
And it's not just a matter of having unclear laws. I've long believed that states have not done a good job of laying out exactly what constitutes unfair discrimination when insurers use AI. They tend to fall back on platitudes without providing clear, unambiguous standards. That means risks need to be managed without adequate guidance, and outcomes depend heavily on who the regulators are and the firm's risk tolerance. I do think insurers are more exposed because of that environment — and because the underlying issues are genuinely difficult.

Christopher Westfall
15:27
If I understand your research correctly, you've looked at the financial sector and argued that something like a fiduciary standard should apply to AI-generated decisions — similar to the fiduciary standard we hold human advisors like CFAs to. Should that kind of 'machine fiduciary' concept apply to insurers using AI-driven underwriting and claims? Or is that too early in the day?

Daniel Schwarcz
16:19
The article you're referencing is specifically about sales — using generative AI tools in that context. The core concern there is significant: generative AI is already quite capable of facilitating the sale of insurance products, essentially replicating what an insurance agent or broker does. And the regulatory landscape is enormously complex — fragmented not only across states, but across types of insurance. Fiduciary duties don't typically apply to many types of insurance agents, so the rules are complicated even before you bring AI into the picture.
We currently have very few regulatory efforts addressing how the rules should adapt to a world where humans are either using AI tools to provide services that agents and brokers traditionally supply — or where those agents and brokers are being replaced entirely. Our argument is that when generative AI tools are used to make product recommendations, a fiduciary standard, or something like it, is appropriate.
The basic reason is that the risks are substantial. If there's one thing generative AI is very good at, it's persuasion — personalized persuasion at the individual level. It's relatively straightforward for bad actors to use these tools to replicate the same problems we worry about with human agents: mis-selling, maximizing commissions at the consumer's expense. These concerns are heightened in certain settings, like life insurance, where some products are genuinely unsuitable and commission-driven sales can be a serious problem.
Meanwhile, the benefit of using generative AI for tailored recommendations is actually quite limited — for most ordinary consumers, you can use pretty simple rules-based algorithms to identify what type of insurance they need, which is what robo-advisors have traditionally done. So on the sales side, there's both enhanced risk and limited additional value proposition — which is why a more aggressive regulatory posture is probably appropriate. We haven't seen that yet, and honestly, right now we just don't have many answers to any of these questions.

Christopher Westfall
19:45
Do you think the industry is even thinking about this yet? Or is it still early days?

Daniel Schwarcz
19:56
Many are thinking about it. It depends on what we mean by 'it,' though. I don't know that many insurers are actively contemplating replacing their entire agent and broker network with generative AI systems — though some are probably starting to think along those lines. What I do know is that many, if not most, insurers are thinking about AI across a range of other contexts and wrestling with how to integrate it responsibly while managing regulatory risk.
But so much depends on the type of insurance, the segment of the industry, the specific use case, and how the AI is being deployed. Is it purely internal? Is it a support tool for employees? Is it automating certain elements? Is it waving through easy cases and flagging hard ones? And beyond that — how do you convince a regulator you're doing this responsibly? How do you audit these systems? What governance mechanisms ensure they don't go off the rails?
Right now, a lot of the rules are quite vague — it's all governance frameworks and auditing requirements, broad principles that create tremendous uncertainty. If you want to use AI aggressively, you're taking on significant legal and regulatory risk, and also significant legal bills, because you need lawyers to help you navigate all of this. So how the field evolves over the next few years remains genuinely unclear.

Christopher Westfall
22:05
I want to switch back to underwriting for a moment. Catastrophe models have been around for a while in the P&C space, allowing insurers to make risk-based decisions for particular perils and geographies. If I understand it correctly, AI can supercharge this — not only in P&C but across the underwriting cycle. Insurance law generally permits risk classification of this kind. But have you thought about what happens when AI can detect hyper-granular traits within a group? Does that start to break the actuarial concept of pooling risk across groups? Or does the question make sense?

Daniel Schwarcz
23:19
It makes sense, but that's not actually where my concern lies. Getting better risk assessment is a real benefit of AI. Catastrophe models can be and are being improved significantly — they can incorporate far more data. That strikes me as a clear positive.
There are even some regulatory questions there. California, for instance, has tried to limit or regulate insurers' use of these models to ensure they operate fairly. My view is that we generally shouldn't be regulating that element of the process — not when we're talking about measuring individual-level climate or property risk rather than proxying for legally suspect factors like race or income. For property-level climate risks specifically, AI has tremendous potential.
And there's another benefit: AI actually allows firms to differentiate and compete. It used to be that there were one or two catastrophe models everyone had to use, which created concerns that pricing was being driven in ways that weren't necessarily warranted. As AI tools that facilitate this kind of decision-making proliferate and become more widely available, that promotes competition among insurers in how they assess risk at the parcel or individual policyholder level.
Sure, there's a black box element when you're relying on AI and machine learning models. But in my mind, as long as there's competition among insurers and they're not all relying on the same inputs, we shouldn't be overly concerned — unless we think there's unfair discrimination based on race, income, or similar factors. I don't have that concern when it comes to measuring and assessing parcel-level climate risk.

Christopher Westfall
26:01
What about on the health or life side? I can imagine an AI model layered on top of a mortality or longevity model, resulting in prices going up or capacity going down. Is there a fear of consumer backlash as they see those changes driven by the models?

Daniel Schwarcz
26:39
There certainly is, and the context is quite different from property. In health insurance, insurers generally can't take individual-level risk into account at all, with very limited exceptions — and for good reason. In life insurance, there's a real concern: we prohibit race-based underwriting and rating, even though race turns out to be quite predictive of longevity — even after controlling for other factors, race remains statistically significant in most contexts.
The difficulty is that if you just throw a general-purpose AI model at all available data and ask it to predict longevity, you'll get more accurate predictions — but those predictions will be more accurate partly because they're proxying for race. There are a million different ways to proxy for race, and you might not even be aware that your model is doing it.
By contrast, in the property context — measuring catastrophe risk, wildfire exposure, that kind of thing — the balance of factors looks very different to me. It's incredibly important to measure those risks correctly, because accurate pricing sends an important signal to homeowners about mitigation. Climate adaptation and resilience are among the core challenges we'll face as a society over the next several decades. Getting the pricing right is fundamental.
So much of this is context-dependent, and that's one of the real challenges across the insurance industry: simple, uniform rules don't work well. I feel differently about AI in life insurance underwriting than I do about catastrophe modeling. There are legitimate concerns about using machine learning as a major part of the life underwriting process, and I think a lot of people in the industry would share them.

Christopher Westfall
29:38
From my own familiarity with discussions about models, the standard answer to concerns about black boxes and opacity seems to be: disclose, disclose, disclose. Do you think there's an effective way to do that, especially as these algorithms become more complex? Is there a legal standard around disclosure that you think is actually adequate for describing what these models are doing?

Daniel Schwarcz
30:19
Again, it's highly context-dependent. Disclosure is a strategy that is often used because it's relatively cheap — and it usually doesn't accomplish much, which is sometimes exactly what players in the industry want. In many settings, disclosure simply doesn't address the underlying problem.
Take the life insurance example we discussed — using machine learning to predict longevity and price or underwrite accordingly. Disclosure doesn't solve the problem if the underlying issue is that you're charging people rates based on factors society has decided insurance shouldn't take into account. No amount of disclosure fixes that.
In health insurance, telling a policyholder that an AI helped determine their claim was not payable at least gives them some basis for understanding the decision — but most people won't read the disclosure, won't know what to do about it, and won't be in a position to act on it. The same goes for sales: it's very easy to include a disclosure that nobody reads. The empirical evidence across a wide range of contexts is pretty consistent: most consumers don't read disclosures, don't understand what they say, and disclosures have little effect on behavior.
Now, that's not to say disclosures can never be one component of a broader, effective regulatory scheme. But in my view, most AI risks in most insurance settings are not susceptible to a disclosure-only solution. More often than not, disclosure is a way to forestall more serious regulatory action rather than to actually protect consumers.

Christopher Westfall
32:37
Another aspect of your research touches on solvency and regulatory capital. In earnings calls and investor updates from insurers, they often discuss AI in the context of pricing, reserving, and scenario analysis — which is common not only in insurance but across financial services generally. Does that use of AI raise the same opacity concerns? Does it strain the capital framework that insurers operate under, given that solvency regulation tends to be fairly standardized?

Daniel Schwarcz
33:32
The context is really different there, and I think using AI for scenario analysis, stress testing, and balance sheet analysis is absolutely appropriate. In most contexts, that's not going to undermine the effectiveness of solvency rules, which are fairly formulaic and quantitative. AI will layer some additional scenario analysis on top of those rules, but I don't see it as posing a meaningful risk on the solvency regulatory side.
In fact, I'd go further: I think enabling insurance regulators to use AI tools to identify struggling firms earlier would be a real benefit. Right now, state regulators rely on various formulas and ratio analyses to flag firms that may need intervention. Those are quite imperfect tools — they can be gamed and may not reflect the actual condition of a firm. AI could meaningfully enhance the regulatory process here.
And on the insurer side, I'm not particularly worried, because AI can't be used to calculate reserves or asset prices — we have pretty clear accounting rules governing those. So the ability to 'game' the solvency framework using AI is limited. I never say never, but for right now, I don't see much regulatory risk on the solvency side. I see a lot of opportunity — for regulators and for firms — whether it's better scenario analysis, asset-liability matching, or gaming out different catastrophe scenarios and cash flow implications. The regulatory and legal concerns there are not as significant as in other areas.

Christopher Westfall
36:20
Along those same lines — if I understand your work correctly, you've argued that the concept of shadow insurance obscures the safety-cost trade-off in the industry through complexity and opacity. Is that made better or worse by AI, in your view?

Daniel Schwarcz
36:47
It's hard to say. When we talk about shadow insurance, we're generally talking about insurers transferring risk to affiliated captive reinsurers in ways that take risk off the balance sheet and make an insurer appear safer than it is. There's significant reliance on state insurance regulators to ensure that these transactions are not materially risky and are appropriately collateralized.
One could certainly worry that AI might disrupt that balance in some way. But without really digging into it, it doesn't seem to me like AI is fundamentally changing that risk. I still think shadow insurance is a concern — if there are ways in which the current regulatory system inappropriately accounts for risk or incentivizes excess conservatism in reserving, we should fix those directly. Trying to bandage over those problems by engaging in these kinds of balance sheet shell games is not a great solution. It creates real opacity and real risk of regulatory arbitrage.
So it's a concern I've had for some time — but not one I'd say is particularly elevated by AI, at least in my current thinking.

Christopher Westfall
38:40
Setting AI aside — what's your proposed solution for shadow insurance and the way risk moves back and forth across different domiciles?

Daniel Schwarcz
38:58
I think we need much more rigid, formulaic rules about when insurers can receive credit for reinsuring with commonly owned captives. And we probably need stronger, more rigid collateral requirements — specifically, collateral that isn't just a letter of credit from a parent or affiliate, but is backed by a truly independent financial institution.
More broadly, I would say the real solution is often to simply not give credit for reinsurance when there are meaningful questions about the reinsurer's capacity to pay — and especially when that capacity is correlated with the risk exposure of the underlying insurer. At that point, you're really not getting the risk transfer you're claiming. It's a tricky set of issues, but part of the answer is fixing some of the formulaic rules that may be applied a bit too mechanically, and part of it is having better safeguards around when insurers can get credit for shifting risk to affiliates.

Christopher Westfall
40:27
That makes sense. And as you know, it's something that's been discussed for years without ever seeming to progress beyond the discussion stage.

Daniel Schwarcz
40:37
No, it doesn't. That's true.

Christopher Westfall
40:40
Getting back to the natural risk side — you've argued that homeowners insurance premiums must reflect the true cost of climate risk while also ensuring accessibility, which is something a lot of people discuss. Could AI improve catastrophe modeling and mitigation incentives, or does it risk accelerating withdrawal from high-risk areas — which is something I see a lot of discussion about?

Daniel Schwarcz
41:18
I have a lot of thoughts on this. I would say that most of the solutions here are probably regulatory rather than AI-based. What I really think — and this will take us a bit off the AI track, but to answer your question — is that we should be encouraging more competition among insurers and facilitating consumers' ability to compare and shop. I actually like the idea of insurance exchanges for property insurance, similar to what we have in health insurance, to make comparison shopping much easier.
I also favor some deregulation on the rating side, and I think there's real promise in group insurance for property — similar to what exists in health insurance. A property insurer could issue a policy to a locality on a group basis, and individual homeowners could opt in. That would align the incentives of insurers and localities to invest in climate mitigation and resilience, which is important because a lot of those efforts need to happen at the local level.
So there are a lot of potential solutions, and it's obviously a very important problem. But when it comes to the role of AI specifically, I don't think it's fundamentally an AI problem. I think it's fundamentally a problem of aligning incentives between insurers and communities, and harnessing market forces to promote social goals alongside individual ones. Better catastrophe modeling can certainly help — by promoting competition among insurers who aren't all relying on the same modeler — but I see AI as moving the needle somewhat on the competition front, not as the core solution. The core issues are regulatory and structural.

Christopher Westfall
43:40
This might require a trigger warning for some people in the industry — but is that essentially Obamacare for homeowners insurance?

Daniel Schwarcz
43:56
I actually have an article called 'Obamacare for Homeowners Insurance' — and I know that branding is not universally welcome. But I genuinely think there are a lot of elements of the Affordable Care Act that translate well to homeowners insurance.
I mentioned insurance exchanges, and I think they're particularly important here. One of the biggest barriers to competition among property insurers is that most consumers don't shop for coverage — they find it intimidating and stressful, and it's very hard to make apples-to-apples comparisons. Infrastructure that facilitates comparison shopping would be a real improvement.
On the deregulatory front, as I said, I'd like to see a move away from prior approval rating toward more managed competition — and that is actually a feature of Obamacare. People forget this, but in some ways Obamacare is less regulatory than our current state-based P&C system. Obamacare doesn't have anything resembling prior approval or rate regulation the way many states do in P&C. I think that's a real problem in the current framework.
I'd prefer a model where the government helps facilitate and structure the market but then allows it to operate. And consistent with the ACA approach, to the extent there are affordability and accessibility problems, you address them directly with income-based subsidies — not hidden cross subsidies. Many states' cross subsidies right now actually end up benefiting wealthy people, because you can't differentiate as much as you'd like across regions and risk profiles. That ends up being a poorly targeted subsidy that diminishes the signaling effect of premiums.
Obamacare started out as a conservative idea — an idea about facilitating market forces — and that's how I see the concept applied to P&C insurance.

Christopher Westfall
46:22
Do you think — and hopefully this makes sense — given what seems to be happening in the homeowners insurance market right now, with rapid price increases and significant decreases in capacity, do you think it's already a fait accompli that there will need to be some kind of marketplace or government program to provide that risk-bearing capacity?

Daniel Schwarcz
47:00
I don't think so — there are still a lot of open questions. The property insurance setting is really quite different from health insurance, for several reasons. First, I think there are far fewer barriers to entry in P&C than in health insurance. There are many more competitors, and it's a much more competitive industry because the fixed costs are much lower. More competition is generally good for the market.
The big problem in property insurance is climate — and it's real — but the question is whether we can adjust. I think we can, but we need to take a longer-term perspective than we currently do. Right now the entire P&C industry operates on yearly policies, which is extremely short-term focused, particularly in property insurance.
Health insurance is a very different context for a variety of structural reasons — the consolidation of providers and hospitals, the misaligned incentives up and down the system among providers, consumers, and insurers. In many ways, property insurance is actually more tractable and more amenable to a market-based solution — not only because of better competitive dynamics, but because the fundamental problem should be manageable as long as we take a more long-term perspective.

Christopher Westfall
48:56
You've been really generous with your time. I want to ask one final question — a big, conceptual one that wraps up a lot of what we've discussed today. With the influence of artificial intelligence, the changes in the insurance market, the capacity issues, the pricing issues — where do you see the industry going over the next five years? Will it look anything like it does today?

Daniel Schwarcz
49:27
I think that relative to many industries, we'll see less change in insurance — because of the regulatory and legal environment, and also because of what I'd call the conservative orientation of a lot of insurers. Many insurers are genuinely risk-averse, and while there certainly are insurtech disruptors, there's still a strong bias toward established players.
That said, I expect massive and rapid change across the economy generally in the next five years. And relative to many fields, insurance will be less disrupted. Part of why I say that is actually because the value proposition for AI has always been particularly clear in insurance. The earliest and most obviously valuable applications of AI were in making predictions — exactly what insurers specialize in. And it's still the case that there's significant reluctance and difficulty in fully embracing that potential. That reluctance reflects the legal, reputational, and regulatory constraints, along with a degree of structural conservatism.
But I still expect significant change in insurance, because the changes will be so large and broad across the economy that even a relatively slow-moving sector will be substantially affected.

Christopher Westfall
51:16
Great. Those are my questions. Thanks so much for taking the time today.

Daniel Schwarcz
51:19
Absolutely. Thanks so much.