I did not plan to sign on for blurk this morning, but ideas for how to conduct some analysis came to me the same way my writing ideas do.
I rarely have extended periods of dedicated time for my second client because of all the day-to-day happenings with my primary client. It's difficult to do the kind of deep analysis work that I did this morning without having solid, uninterrupted focus.
I used my powers of SQL to write queries that capture the data we need to assess the effectiveness of our solution. The project involves improving the process for handling duplicate claims payments. First, I created summary reports that count claims by month to establish "business as usual" prior to the implementation of our solution, then ran analogous monthly reports for the period after implementation to compare against. Interestingly, I saw an 80/20 split of denied vs. paid claims in both time periods.
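For the curious, the monthly report boils down to something like the sketch below. It is an illustration rather than the real thing: the table and column names (claims, received_date, disposition, duplicate_flag) and the :go_live_date parameter are stand-ins for the client's actual schema.

```sql
-- Minimal sketch of the monthly denied-vs-paid summary.
-- All names here are hypothetical placeholders, not the client's schema.
SELECT
    EXTRACT(YEAR  FROM received_date) AS claim_year,
    EXTRACT(MONTH FROM received_date) AS claim_month,
    SUM(CASE WHEN disposition = 'DENIED' THEN 1 ELSE 0 END) AS denied_count,
    SUM(CASE WHEN disposition = 'PAID'   THEN 1 ELSE 0 END) AS paid_count,
    COUNT(*) AS total_count
FROM claims
WHERE duplicate_flag = 'Y'                 -- only claims flagged as potential duplicates
  AND received_date < :go_live_date        -- flip to >= :go_live_date for the post-implementation run
GROUP BY EXTRACT(YEAR FROM received_date), EXTRACT(MONTH FROM received_date)
ORDER BY claim_year, claim_month;
```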
Next, I compared counts of claims that were manually processed vs. auto-processed. One of the key deliverables for the project is to reduce manual processing, which cuts staffing needs and company expense while improving quality. Prior to the solution, approximately 65-75% of claims identified as potential duplicates were worked manually. After the solution, roughly that same percentage of claims are being auto-processed. That still leaves too much manual processing, so I am working on additional queries to identify exactly which claims are still being touched by hand and why.
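The manual-vs-auto comparison is a similar aggregate. Same caveat as before: process_type and its MANUAL/AUTO values are placeholders for however the client's system actually records who (or what) worked the claim.

```sql
-- Sketch of the manual vs. auto-processed breakdown for the post-implementation window.
-- Column names and values are hypothetical placeholders.
SELECT
    SUM(CASE WHEN process_type = 'MANUAL' THEN 1 ELSE 0 END) AS manual_count,
    SUM(CASE WHEN process_type = 'AUTO'   THEN 1 ELSE 0 END) AS auto_count,
    ROUND(100.0 * SUM(CASE WHEN process_type = 'MANUAL' THEN 1 ELSE 0 END)
          / COUNT(*), 1)                                      AS pct_manual
FROM claims
WHERE duplicate_flag = 'Y'
  AND received_date >= :go_live_date;     -- post-implementation period only
```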
Not bad for a solid three hours of blurk.