Actuarial Outpost
 
Actuarial Outpost > Actuarial Discussion Forum > Property - Casualty / General Insurance
#71
08-06-2019, 08:00 PM
redearedslider
Member, CAS
Join Date: Oct 2015
Posts: 13,993

Quote:
Originally Posted by Vorian Atreides View Post
Oddly enough, some of this is presented/alluded to with the CSPA Exam 3 material.
Glad to hear it!
__________________
Quote:
Originally Posted by Abraham Weishaus View Post
ASM does not have a discussion of stimulation, but considering how boring the manual is, maybe it would be a good idea.
#72
08-12-2019, 11:44 AM
Actuarially Me
Member, CAS
Join Date: Jun 2013
Posts: 191

Quote:
Originally Posted by Vorian Atreides View Post
Performing "better" is clearly not a selling point for your boss. The more you try to harp on this idea, especially to the AO audience, the more you'll start coming across as "not a team player".

And from a business problem perspective, a viable solution isn't using the "best" model. It is enough to use one that generates a (large enough) profit.

To put your situation in another way:

Suppose your competitors are selling a product for $100 and you have a customer base that is willing to buy that same product from you for $150.

The product costs you only $90 (assuming perfect knowledge of true costs).

Do you sell that product for $99? Or do you sell it for $149? Why?

I know the model isn't the be-all and end-all. I just don't think it's appropriate to break the assumptions behind a GLM without having an understanding of GLMs. I think it warrants a discussion at the very least. If there are time constraints, then there should be a discussion after the fact on why things were changed.

You can't take a factor from MultiRate, which uses a minimum bias approximation, and then assume it's equivalent to a GLM coefficient. You can't cherry-pick factors from one or the other. There are standardized ways to create ensemble models.
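
For what it's worth, here's a minimal sketch of the kind of thing I mean (simulated data; the variable names and factor levels are made up). If a factor is going to be overridden judgmentally or pulled from a minimum bias run, the cleaner approach is to put that factor into the offset and re-fit the GLM so the remaining coefficients re-balance around it, rather than swapping individual factors between two different fits.

Code:
# Minimal sketch on simulated data -- variable names and factor levels are made up.
set.seed(42)
n <- 10000
dat <- data.frame(
  terr  = factor(sample(c("A", "B", "C"), n, replace = TRUE)),
  const = factor(sample(c("frame", "masonry"), n, replace = TRUE)),
  expo  = runif(n, 0.5, 1.0)
)
true_mu <- with(dat, 0.05 * expo *
                  ifelse(terr == "B", 1.3, ifelse(terr == "C", 1.6, 1.0)) *
                  ifelse(const == "masonry", 0.8, 1.0))
dat$claims <- rpois(n, true_mu)

# Multiplicative frequency GLM (Poisson, log link), exposure as offset
full_fit <- glm(claims ~ terr + const, family = poisson(link = "log"),
                offset = log(expo), data = dat)

# If the territory relativities are overridden (judgmentally, or taken from a
# minimum bias run), put the overridden factor in the offset and REFIT, so the
# remaining coefficients re-balance around it:
override <- c(A = 1.00, B = 1.25, C = 1.50)   # hypothetical selected relativities
dat$terr_off <- log(override[as.character(dat$terr)])
refit <- glm(claims ~ const, family = poisson(link = "log"),
             offset = log(expo) + terr_off, data = dat)

# The construction relativity generally differs between the two fits, which is
# why factors from two different procedures aren't interchangeable one-for-one.
exp(coef(full_fit))
exp(coef(refit))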

It's one actuary making all these decisions and there's no one checking his work (or mine, for that matter). People's work should always be challenged and defended. If I make a specific decision in my model, I need to be able to back up my reasoning. You have to be able to tell the story. The whole point of actuaries is to be able to take complex concepts and explain them to people with a non-actuarial background. If you're not capable of doing that, you shouldn't be making business decisions.

I'm not saying I'd do a better job. I'm not saying I'm smarter than him. I'm just saying I want to understand the process so that if we get audited (the reserving auditors were pretty in-depth; not sure about pricing) or an underwriter comes to me with questions, I can answer them instead of saying "it's what the manager felt was right".

It's just been a frustrating 6 months. I know it makes me look like I'm not a team player, but it's been an accumulation of frustrations. I don't know why actuaries are making all the data business decisions for a company that doesn't incur any pure risk. I spend most of my time doing data engineering work. I spent a month learning Ubuntu so I could manage an RStudio Connect server, only to find out that someone in another office deploys R code in Docker images. I'm asked to evaluate vendor data, and when my analysis doesn't align with expectations, I have to slice the data and find a use case so we can justify contracting with the vendor.

And after all of this, I still don't know the standard procedure to implement a GLM for actuarial pricing. I want to know that so I can at least bring that to him and say "here's what other actuaries are doing" and start a dialogue.
#73
08-13-2019, 08:42 PM
iwakura42
Member, CAS
Join Date: Feb 2004
Location: Hollywood
Favorite beer: Hazy LA IPA
Posts: 216

Actuarially Me, you are indeed quite frustrated, and, based on all this discussion, probably for pretty good reasons on the technical side. Your idealism shines, and that's a genuinely good thing. But also remember that an actuary is a business professional with mathematical training, not a mathematician (even one working in a business).

Quote:
Originally Posted by Actuarially Me View Post
I know the model isn't the be-all and end-all. I just don't think it's appropriate to break the assumptions behind a GLM without having an understanding of GLMs.
That's probably right.

Quote:
I think it warrants a discussion at the very least. If there are time constraints, then there should be a discussion after the fact on why things were changed.
That's probably right also. If you aren't able to get this before your annual review, that would probably be a good time to raise the overall topic and come to terms on your relationship with your manager.

Quote:
You can't take a factor from MultiRate, which uses a minimum bias approximation, and then assume it's equivalent to a GLM coefficient. You can't cherry-pick factors from one or the other.
Well, you "can", albeit with low technical confidence. At the end of the day, the business results may be affected one way or another. At least you're not blowing up the mortgage-backed securities market and taking the financial system down.

...

Quote:
The whole point of actuaries is to be able to take complex concepts and explain them to people with a non-actuarial background. If you're not capable of doing that, you shouldn't be making business decisions.
That's really the prerogative of management and the owners. Given your manager's ongoing tenure, he would seem to be held in enough confidence to keep the ship afloat and moving forward.

Quote:
I'm not saying I'd do a better job. I'm not saying I'm smarter than him. I'm just saying I want to understand the process so that if we get audited (the reserving auditors were pretty in-depth; not sure about pricing) or an underwriter comes to me with questions, I can answer them instead of saying "it's what the manager felt was right".
You are probably taking on too much responsibility, psychically speaking. You aren't going to get arrested or court-martialed. Do your best, document your conversations if you're concerned about them, and don't drive yourself crazy.

Quote:
It's just been a frustrating 6 months.
I reckon folks on this board hear that and sympathize enough to talk through it with you for a good eight pages or more. That's what this board's here for, sometimes, I guess.

Quote:
And after all of this, I still don't know the standard procedure to implement a GLM for actuarial pricing. I want to know that so I can at least bring that to him and say "here's what other actuaries are doing" and start a dialogue.
Probably the CSPA credential would be the best way to do this.

Good luck. You're doing really well.

iwakura42

Last edited by iwakura42; 08-13-2019 at 08:51 PM..
#74
08-14-2019, 09:35 AM
Actuarially Me
Member, CAS
Join Date: Jun 2013
Posts: 191

Quote:
Originally Posted by iwakura42 View Post
Actuarially Me,


iwakura42
Thanks. Had a couple of sit-downs to discuss the situation. I compared the original job description to what I'm actually doing a year later, and they understand why I'm frustrated. The company doesn't really have a solid data structure, so a lot of my time goes to data engineering work that I'm not equipped for. For instance, I spent a month learning Ubuntu so I could manage an RStudio Connect server to deploy code. Then I found out there's a guy in IT who knows how to deploy Docker images pretty easily. Edit: For the record, it took me a month because there was a package that had to be built from source, and I went down a rabbit hole trying to get R on Ubuntu and RStudio Connect to link to the same package.

Then I found out the model I helped with isn't going into production because it's too complex and IT doesn't have time to implement it. That's extremely frustrating because I had suggested starting out with a simple model and then working toward frequency/severity and then by peril. That way, I'd have time to develop and learn from my past mistakes. I learned a lot through the process, but it just stinks knowing most of my work was pointless, and it makes me feel like I failed.

So my options are to step back and do analyst work (streamlining data processes, updating Tableau dashboards) and hope some data science opportunities open up, or to look somewhere else that utilizes my skill set. Considering we're just now looking for a data engineer, I probably won't have any serious projects for a year.

I could sit here and collect checks as an overpaid analyst, but that's not really what I want to do. This article sums up my experience.

Either way, thanks to everyone for helping me out when I had no one to bounce ideas off of. Thanks for listening to me vent and continuing the conversation despite my stubbornness. I learned a lot from the process; I just wish I had something to show for it!

Hopefully my threads will be useful to someone going down a similar path. I don't know if I'll stick with the insurance industry; I think I'm a better fit for consulting (though I hate the hours).

Last edited by Actuarially Me; 08-14-2019 at 09:51 AM..
#75
08-14-2019, 12:51 PM
Vorian Atreides
Wiki/Note Contributor, CAS
Join Date: Apr 2005
Location: As far as 3 cups of sugar will take you
Studying for ACAS
College: Hard Knocks
Favorite beer: Most German dark lagers
Posts: 66,569

I would say that if you learned "a lot" from your experience, then the experience was not pointless.

Any time you're in a new area (regardless of experience), I would say that for the first several months (possibly the first year) your primary mission is learning.

Learning what is currently done. Learning why it's done that way.

Learning the true goals of the area you're working in.

Learning about the history of "what has been tried but not implemented".

Most importantly, learning how best to communicate with your peers, immediate supervisors, and the management/leadership team.


As I pointed out earlier, even if your current knowledge base/experience says that some aspects (or even all of them) need to change, wholesale change will be impossible to achieve. The time spent learning all of that other information/history will be important in knowing which things need to be changed first (and smaller initial changes are going to be far more fruitful than making the big "critical" change right away), with additional changes brought in later on.

Note that by starting with the smaller changes, you can learn even more about both the environment and the current processes (some things "deeper down" need to be changed before some of the other "important" changes can be made).

It sounds like your management was aware of the IT limitations, but didn't want to make you feel "bad" for doing "all that work" that won't see the light of day. However, the things you learned aren't something to devalue. Also, some of your work was likely leveraged when they told someone that a change from the current setup was needed, and where that change should be made; something their current setup wasn't able to tell them effectively. (Similarly, they may have wanted to make a particular change, and your work provided the support needed to act on the decision.)
__________________
I find your lack of faith disturbing

Why should I worry about dying? It's not going to happen in my lifetime!


Freedom of speech is not a license to discourtesy

#BLACKMATTERLIVES
#76
08-14-2019, 03:59 PM
nonactuarialactuary
Member, Non-Actuary
Join Date: May 2008
Posts: 2,234

What's the variable? Is it possible that someone (maybe your boss's boss?) spent a lot of time and effort getting the company to collect data on variable X? If your model excludes variable X because it's statistically insignificant, but people high up in the company have a vested interest in including the variable, you could see pushback. This is the politics of business, and yes, it sometimes leads to suboptimal decision making.

Perhaps a univariate analysis shows that variable X is predictive of loss costs. Further, perhaps that predictive ability goes away when combined with other variables Y and Z in the model, but on its own, variable X is predictive. To the decision makers in the company, you're proposing a model that omits variable X even though it's clearly important (and they have a nice univariate chart to back them up on that). Your job, therefore, is to communicate to them that no, your model doesn't omit the variable, but rather includes the effect indirectly. You should also look to quantify exactly how much better your model is expected to perform over the course of a typical year in terms easily understandable to business leaders (e.g., combined ratio for the book using your rates vs. combined ratio using the judgmental rates: are they materially different?). Finally, you have to do this in a way that makes your boss look good in front of his boss and doesn't make you come across as arguing over immaterial things. There's a lot of delicate maneuvering here, and whipping out pages of t-tests, Gini scores, etc. isn't going to advance the conversation when an executive says "but what about variable X?"
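
As a toy illustration of that first point (simulated data, nothing to do with your actual book): a variable can look strongly predictive on its own simply because it's correlated with the real driver, and the apparent effect washes out once both are in the GLM.

Code:
# Toy example, simulated data: X is correlated with the real driver Y but has
# no direct effect on claim frequency.
set.seed(1)
n <- 20000
Y <- rnorm(n)
X <- 0.8 * Y + rnorm(n, sd = 0.6)
claims <- rpois(n, exp(-2 + 0.5 * Y))

uni   <- glm(claims ~ X,     family = poisson())   # one-way: X looks significant
multi <- glm(claims ~ X + Y, family = poisson())   # together: X's effect washes out

summary(uni)$coefficients["X", ]
summary(multi)$coefficients["X", ]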

Bottom line, this sounds like a communication problem. You've built a model that's incrementally better than the version the team plans to go with, but you're having difficulty getting buy-in because you're unable to communicate why your model is better. As a result, they're more inclined to believe the model recommended by the more senior person. Even here on the AO, you've included a ton of unnecessary detail but haven't clearly communicated why your model works better in a way that an executive can quickly understand. Focus on the communication aspect of it. That'll help you more than whatever statistical tests I assume you're thinking about.
#77
08-14-2019, 08:29 PM
magillaG
Member
Join Date: Jun 2007
Posts: 3,056

Quote:
Originally Posted by Actuarially Me View Post
Thanks. Had a couple of sit-downs to discuss the situation. I compared the original job description to what I'm actually doing a year later, and they understand why I'm frustrated. The company doesn't really have a solid data structure, so a lot of my time goes to data engineering work that I'm not equipped for. For instance, I spent a month learning Ubuntu so I could manage an RStudio Connect server to deploy code. Then I found out there's a guy in IT who knows how to deploy Docker images pretty easily. Edit: For the record, it took me a month because there was a package that had to be built from source, and I went down a rabbit hole trying to get R on Ubuntu and RStudio Connect to link to the same package.

Then I found out the model I helped with isn't going into production because it's too complex and IT doesn't have time to implement it. That's extremely frustrating because I had suggested starting out with a simple model and then working toward frequency/severity and then by peril. That way, I'd have time to develop and learn from my past mistakes. I learned a lot through the process, but it just stinks knowing most of my work was pointless, and it makes me feel like I failed.

So my options are to step back and do analyst work (streamlining data processes, updating Tableau dashboards) and hope some data science opportunities open up, or to look somewhere else that utilizes my skill set. Considering we're just now looking for a data engineer, I probably won't have any serious projects for a year.

I could sit here and collect checks as an overpaid analyst, but that's not really what I want to do. This article sums up my experience.

Either way, thanks to everyone for helping me out when I had no one to bounce ideas off of. Thanks for listening to me vent and continuing the conversation despite my stubbornness. I learned a lot from the process; I just wish I had something to show for it!

Hopefully my threads will be useful to someone going down a similar path. I don't know if I'll stick with the insurance industry; I think I'm a better fit for consulting (though I hate the hours).
Doing something new at a company is very hard. There is a lot of failure. The important thing is learning how to go outside of your comfort zone to get things working. You spend a month compiling R when it seems like it should only take a few days. And you realize that compiling stuff in the real world is a big pain. Or you realize you've got to re-do all your work from the last 6 months. That's how it goes. It doesn't mean you did anything wrong.

It sounds like this was the first year you did the model. I would consider influencing the rates, and starting to create the needed infrastructure for more modeling, to be a success.

It takes years to set up a data pipeline and models and analysis chains, and integrate them with production. One option is to look at this as an opportunity. The advantage to setting up a system like this is that you learn a wide range of skills that you might not if you are more specialized. If you spend another year helping a data engineer to set up a data pipeline, and really hone your programming skills, then that is valuable experience in my opinion.
#79
08-16-2019, 09:18 AM
Actuarially Me
Member, CAS
Join Date: Jun 2013
Posts: 191

Quote:
Originally Posted by nonactuarialactuary View Post
What's the variable? Is it possible that someone (maybe your boss's boss?) spent a lot of time and effort getting the company to collect data on variable X?
There is no peer review outside of me and my boss. The variable was just a ratio of two other variables, Building Value per Sq Ft, added to a model that already had Building Value: two variables with high collinearity, added without re-running the GLM.
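
To illustrate what I mean with a minimal sketch (simulated data; the names and relationships are made up, not our actual book): a ratio like Building Value per Sq Ft is mechanically correlated with Building Value, so once both are in the same GLM the coefficients and standard errors move around, which is exactly why I'd want the model re-fit and reviewed rather than a factor bolted on.

Code:
# Minimal sketch on simulated data -- names and relationships are made up.
set.seed(7)
n <- 5000
sqft <- exp(rnorm(n, 8.0, 0.25))             # building square footage
bv   <- sqft * exp(rnorm(n, 4.5, 0.30))      # building value, largely driven by sqft
bv_per_sqft <- bv / sqft                     # the ratio variable

cor(log(bv), log(bv_per_sqft))               # correlated by construction

claims <- rpois(n, exp(-3 + 0.4 * (log(bv) - mean(log(bv)))))

m1 <- glm(claims ~ log(bv),                    family = poisson())
m2 <- glm(claims ~ log(bv) + log(bv_per_sqft), family = poisson())

# Coefficients and standard errors shift once the correlated ratio is added,
# which is the kind of thing that should trigger a refit and a review, not a
# factor bolted onto an already-fitted model.
summary(m1)$coefficients
summary(m2)$coefficients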
#80
08-16-2019, 10:11 AM
Actuarially Me
Member, CAS
Join Date: Jun 2013
Posts: 191

Quote:
Originally Posted by magillaG View Post

It sounds like this was the first year you did the model. I would consider influencing the rates, and starting to create the needed infrastructure for more modeling, to be a success.

Yeah, it's a bit of a complex situation. I work as part of the actuarial team, but my "clients" are underwriters. Each actuary here has a group of MGUs they represent, and my first year here was spent working with one specific actuary. He's not technically my boss, but he was the first one to give me projects, so I considered him my manager.

The job description is a senior role where I'd have ownership of the processes. I have experience building predictive models, just not actuarial pricing models (or data that is overdispersed and zero-inflated).

I accepted the position because I work best in a flat hierarchy. When I worked in consulting, our analysts' opinions and views were valued and everyone was involved in the process from beginning to end. If the principals found something wrong with the analysis and made changes, they would explain why they made the changes as part of the learning process. It never had that feeling of "I'm your boss and you're inexperienced, so this is how it is." In my opinion, that creates a more collaborative environment and everyone feels like their work is valued.

I went into this job expecting the same sort of treatment, considering I have experience managing teams, but instead I was micromanaged and changes were made without reasons given. When multiple things are changed without reason, it makes me feel that either my opinion isn't valued or I completely blew it. When I brought up these concerns, the responses were basically "I'm sorry you don't agree, but I think it should be this way". This happened continually, down to how a lift chart is calculated. It's demotivating.
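
To give a sense of how much room there is to disagree on even that: a decile lift chart can be built several ways. A minimal sketch of one common version, assuming you have holdout vectors of predictions, actual losses, and exposure (names here are just placeholders):

Code:
# One common way to build a decile lift chart on holdout data:
# sort by predicted loss cost, bucket into ten equal-exposure groups, then
# compare average actual vs. average predicted in each bucket.
# 'pred', 'actual', 'exposure' are assumed holdout vectors of the same length.
lift_chart <- function(pred, actual, exposure, n_buckets = 10) {
  ord <- order(pred / exposure)                     # sort by predicted loss cost
  pred <- pred[ord]; actual <- actual[ord]; exposure <- exposure[ord]
  bucket <- ceiling(cumsum(exposure) / sum(exposure) * n_buckets)
  bucket <- factor(pmin(bucket, n_buckets), levels = 1:n_buckets)
  data.frame(
    bucket     = 1:n_buckets,
    avg_pred   = as.numeric(tapply(pred,   bucket, sum) / tapply(exposure, bucket, sum)),
    avg_actual = as.numeric(tapply(actual, bucket, sum) / tapply(exposure, bucket, sum))
  )
}

Sorting by loss ratio instead of loss cost, or bucketing by policy count instead of exposure, gives a different-looking chart, which is exactly the kind of choice I think should be discussed rather than just decided.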

The underwriters just wanted to update their model because it's outdated. I spent the first two months of the project researching actuarial GLMs and learning the data. After that, I spent a month or two doing EDA and building a pure premium model. We presented that to the underwriters and everything was kosher. We didn't implement it because the manager (the actuary) decided he wanted to do a frequency/severity model instead. So I spent another month researching proper ways to do that and building a frequency/severity model. We presented that to the underwriters and everything was kosher. That wasn't implemented either, because he decided he wanted a by-peril model split by frequency/severity, so eight total models.
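
For anyone following along, the structure that implies is roughly one frequency GLM and one severity GLM per peril, with the indicated pure premium built up as the sum over perils of predicted frequency times predicted severity. A rough sketch of that scaffolding only (the peril names, rating variables, data frame, and column names here are all hypothetical, and it leans on the independence-across-perils assumption mentioned above):

Code:
# Rough scaffolding only -- peril names, rating variables, and column names are
# hypothetical, and perils are treated as independent (the loose assumption above).
# Assumes a 'policies' data frame with columns: exposure, the rating variables,
# and claim_count_<peril> / avg_severity_<peril> for each peril.
perils      <- c("fire", "water", "wind", "other")
rating_vars <- c("construction", "protection_class", "occupancy")

fit_peril <- function(peril, data, rating_vars) {
  rhs <- paste(rating_vars, collapse = " + ")
  freq_formula <- as.formula(paste0("claim_count_", peril,
                                    " ~ offset(log(exposure)) + ", rhs))
  sev_formula  <- as.formula(paste0("avg_severity_", peril, " ~ ", rhs))

  # Frequency: Poisson (or quasi-Poisson / negative binomial if overdispersed)
  freq_fit <- glm(freq_formula, family = poisson(link = "log"), data = data)

  # Severity: Gamma on claims-bearing records, weighted by claim count
  sev_data <- data[data[[paste0("claim_count_", peril)]] > 0, ]
  sev_data$claim_wt <- sev_data[[paste0("claim_count_", peril)]]
  sev_fit <- glm(sev_formula, family = Gamma(link = "log"),
                 weights = claim_wt, data = sev_data)

  list(freq = freq_fit, sev = sev_fit)
}

fits <- lapply(perils, fit_peril, data = policies, rating_vars = rating_vars)
names(fits) <- perils

# Indicated pure premium per policy = sum over perils of E[frequency] * E[severity]
pure_premium <- Reduce(`+`, lapply(fits, function(f) {
  predict(f$freq, newdata = policies, type = "response") *
    predict(f$sev,  newdata = policies, type = "response")
}))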

I spent the next couple of months building the by-peril model. We had already made the loose assumption that all perils are independent, because he wanted to be able to calculate premium by peril and, at that point, we needed to get something into production. I created the eight models; he took the coefficients and made selections based on eyeballing them against the MultiRate software. There was no discussion with the underwriters about target markets, and no discussion about capping debits/credits. This was all done before even talking to the underwriters. During the meeting, the actuary told the underwriters he "put on his underwriter hat when making these selections". The underwriters expressed concerns, stating they'd rather trust the data and go from there. The actuary insisted he did things correctly, and the most awkward meeting of my life continued. I expressed my concern about not being comfortable with the changes, it was shrugged off with "I'm sorry you feel that way", and so I made the previous thread asking if this is common.

I reef'd the prior thread because co-workers lurk here and it'd be pretty easy to figure out who I am. The post was mostly venting, but people did give good advice.

It still never sat well with me, and I wanted to know what a typical process is for actuarial pricing, or what the relationship between a data scientist and an actuary looks like. Because if it's more of a hierarchical relationship, this isn't the industry for me; I'm looking for more of a collaborative role. I still learned a lot of valuable things here, but I feel bad that I haven't provided any value to the company yet since the by-peril model is canned. If anything, this role made me realize how much I enjoy working in a collaborative environment, and it gave me a sense of direction for where I want my career to go.

Fortunately, it's an employee's market right now, so there's no lack of opportunities out there. I'm sure I'll find a good fit, even if I have to take a slight pay cut.