Demographics not working sometimes

Sometimes we get an error message when clicking on the demographics section. Attached is a screenshot of the message (EMERSE-error).

The SEX_CD column of my Oracle patient table is fully populated (only M and F entries), and I see the same column in Solr's patient core (again with only M and F entries). I can reliably get this to break on a very small subset of people (14), and those patients do appear to be present in my patient table.

Any ideas on where to start debugging this? I do notice that my Solr patient-slave core has the wrong patient count, but I'm not sure whether the demographics ever pull from there.

The demographics section actually always pulls from patient-slave.

Since it sounds like it's only a problem sometimes, I'm guessing there are a few patients in the patient-slave index with mismatched MRNs. A quick fix would be to go to the Solr admin page, click on the patient-slave core, and open its Replication tab. Make sure its master URL is correct (it should be pointing to localhost:8983/patient), then check whether the master's searching version and the slave's searching version are the same. (They can be different and still show green!) If they differ, click the Replicate Now button, and that should fix it.
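You can also check this from the command line via Solr's replication handler (`/replication?command=details`). A minimal sketch, assuming the default localhost:8983 URLs and the core names above; the exact JSON layout of the details response can vary between Solr versions, so treat the field path as an assumption:

```python
import json
from urllib.request import urlopen

def replication_details(core_url):
    """Fetch the replication details for one core, e.g.
    replication_details("http://localhost:8983/solr/patient-slave")."""
    with urlopen(core_url + "/replication?command=details&wt=json") as resp:
        return json.load(resp)

def index_version(details):
    """Pull the local indexVersion out of a details response.
    (JSON layout assumed from recent Solr versions; adjust if yours differs.)"""
    return details["details"]["indexVersion"]

# Usage sketch (against a live Solr):
#   master = replication_details("http://localhost:8983/solr/patient")
#   slave  = replication_details("http://localhost:8983/solr/patient-slave")
#   if index_version(master) != index_version(slave):
#       # same effect as the Replicate Now button:
#       # hit /replication?command=fetchindex on the slave
#       print("out of sync; trigger fetchindex on the slave")
```

The `command=fetchindex` request at the end does the same thing as the Replicate Now button in the admin UI.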

EMERSE should do this for you once a day, so if it isn't, something may be wrong there. Sometimes you also need to change when these jobs run. For instance, if EMERSE tells the slave to replicate from the master before the process of updating the master from the patient table has finished, the slave won't end up up to date. (EMERSE isn't smart enough to detect this yet; it just runs each task when it's scheduled.) The tasks/jobs that run on a schedule are described here: http://project-emerse.org/documentation/config_guide.html?queryTerm=cron#truesolr-patient-index-replication-interval which also tells you which configuration property changes the schedule.
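To illustrate the ordering problem, here is a hypothetical staggering of the two cron schedules. The property names below are made up for illustration (check the config guide linked above for the real keys in your EMERSE version); the point is just that the slave's replication should fire well after the master update normally finishes:

```properties
# Hypothetical property names -- see the config guide for the real keys.
# Update the patient master index from the patient table at 2:00 AM...
update.patient.index.cron=0 0 2 * * ?
# ...and only tell patient-slave to replicate at 4:00 AM, so the master
# update has time to finish before the slave copies from it.
solr.patient.index.replication.cron=0 0 4 * * ?
```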

If this isn’t the problem, let me know!


Longer story about how you can compare the MRNs between the indexes.

So, I would look at the Solr log files (in solr-VERSION/server/logs/; pick the latest/most relevant one). These will show you the exact query EMERSE made to Solr. For example, you may see:

2020-10-06 19:31:11.696 INFO  (qtp1845623216-50) [   x:patient-slave] o.a.s.c.S.Request [patient-slave]  webapp=/solr path=/select params={q=DELETED_FLAG:0&json.facet={SEX_CD:{type:terms,limit:100,missing:true,mincount:0,field:SEX_CD,sort:index},BIRTHDATE:{type:range,field:BIRTHDATE,start:NOW-130YEARS,end:NOW,gap:'%2B10YEARS'},RACE_CD:{type:terms,limit:100,missing:true,mincount:0,field:RACE_CD,sort:index},ETHNICITY_CD:{type:terms,limit:100,missing:true,mincount:0,field:ETHNICITY_CD,sort:index}}&df=MRN&fq={!join+from%3DMRN+to%3DMRN+fromIndex%3Dunified}RPT_TEXT:(words+AND+words)&rows=0&wt=javabin&version=2} hits=711 status=0 QTime=236

Look at the fq query parameter (inside the params={...} bit). There you should see something like:

{!join+from%3DMRN+to%3DMRN+fromIndex%3Ddocuments}RPT_TEXT:(words+AND+words)

which, after removing the URL escapes, is:

{!join from=MRN to=MRN fromIndex=documents}RPT_TEXT:(words+AND+words)
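(You don't have to un-escape these by hand; Python's standard library can decode the parameter for you. Note that the + signs are URL-encoded spaces, so they decode too:)

```python
from urllib.parse import unquote_plus

# The fq value copied straight out of the Solr log line above
fq = "{!join+from%3DMRN+to%3DMRN+fromIndex%3Ddocuments}RPT_TEXT:(words+AND+words)"

# unquote_plus turns '+' into spaces and decodes %XX escapes like %3D -> '='
print(unquote_plus(fq))
# {!join from=MRN to=MRN fromIndex=documents}RPT_TEXT:(words AND words)
```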

The join bit says the rest of the query runs against the documents index, but we are joining that to the index this request was made to (patient-slave) by matching the MRN fields in both indexes. This is all probably correct. The part after the } is the query itself. Take that bit and make a query to the documents index like so:

http://localhost:8983/solr/documents/select?q=RPT_TEXT:(words+AND+words)&fl=MRN&rows=1000

(You can do this in the admin query interface for the documents core by just filling in the fields corresponding to the query parameters.) I added the query parameter fl to show only the MRN field, and set rows to a thousand so you can see a good number of them. Since you can whittle the list of problematic patients down to 14, I would do this with the query that causes the problem for those few patients. Then, take the MRNs shown and see whether they are in the patient-slave index. (You can do this by running the query MRN:the-mrn-number in the admin query page, but against the patient-slave core.)
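If you want to automate that comparison, here is a minimal sketch using only the Python standard library. The base URL and core names are assumptions based on the defaults above, and it assumes MRN is a single-valued stored field:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:8983/solr"  # assumed default Solr location

def solr_field_values(core, query, field, rows=1000):
    """Run a query against a core and return the set of values of one
    stored, single-valued field (MRN here)."""
    params = urlencode({"q": query, "fl": field, "rows": rows, "wt": "json"})
    with urlopen(f"{BASE}/{core}/select?{params}") as resp:
        docs = json.load(resp)["response"]["docs"]
    return {d[field] for d in docs if field in d}

def missing_mrns(doc_mrns, patient_mrns):
    """MRNs returned by the documents index that patient-slave doesn't know."""
    return sorted(set(doc_mrns) - set(patient_mrns))

# Usage sketch (against a live Solr), using the problematic query:
#   hits = solr_field_values("documents", "RPT_TEXT:(words AND words)", "MRN")
#   for mrn in hits:
#       if not solr_field_values("patient-slave", f"MRN:{mrn}", "MRN", rows=1):
#           print("missing from patient-slave:", mrn)
```

Any MRN the loop prints is a document-side patient that the demographics request can't join to, which matches the "only a few patients break it" symptom.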

It turned out that my slave wasn't set up correctly. We transitioned to an SSL certificate after the initial setup, so the original http call was failing and Solr was throwing errors. Switching the localhost master URL from http to https fixed our problem. The error messages were a bit misleading, but knowing that the demographics use the slave was a great clue. Maybe the health of the slave could be one of the statuses shown in the future EMERSE admin console?

Thanks!