A community of 30,000 US Transcriptionists serving the Medical Transcription Industry


VR must be drunk or something - me


Posted: Jul 05, 2011

Doctor said, "The patient has received 1 gram of vancomycin in the emergency room."

VR wrote, "The patient has received 1 gram of vodka in the emergency room."

Where does that come from?  It doesn't even sound like vancomycin except for the V.


sounds like the garbage I get to correct on a - daily basis

[ In Reply To ..]
.

You're lucky vodka was all it didn't hear correctly. - Some of my VR is garbage

[ In Reply To ..]
every other word. Takes forever to edit.

VR error - Snow Bunny

[ In Reply To ..]
It's not drunk ... the speech engine is just not getting trained correctly.

That's why you edit VR verbatim except for correcting grammatical (is/was) and punctuation errors, as well as following a minimal number of BOS rules.

The rules that apply to keyboard transcription are different from those for VR editing. When you're transcribing documents, you have more freedom to edit. When you're editing VR dictation, stick to verbatim as much as possible. That way the speech engine can learn to recognize things.

Been correcting ASR verbatim for 4 or 5 years - now. It is still dumb as a

[ In Reply To ..]
rock. Never, ever "learns" anything and my pay is still down by 50%.

Correcting is not training - Snow Bunny

[ In Reply To ..]
What I suspect is happening on your end is the doctors dictate, the engines transcribe, you fix the errors, the reports go back, and that's the end of it. If I'm correct, then the speech engine will never be trained.

The facility has to take the "corrected" document and run it back through the speech engine. That's the only way it will learn that when the doctor is doing surgery and it puts out "2 2-0 nylon sutures," and you correct it to "two 2-0 nylon sutures," it should appear that way from then on.
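The feedback loop described above can be sketched in a few lines of Python (purely illustrative; `ToySpeechEngine` and its methods are hypothetical names, not any vendor's actual API):

```python
from collections import Counter

class ToySpeechEngine:
    """Toy model of the point above: an engine only 'learns' if
    corrected transcripts are actually fed back to it."""

    def __init__(self):
        # counts how often each recognized word was corrected to another
        self.corrections = {}

    def recognize(self, raw_words):
        # apply the most common learned correction for each word, if any
        out = []
        for w in raw_words:
            counts = self.corrections.get(w)
            out.append(counts.most_common(1)[0][0] if counts else w)
        return out

    def feed_back(self, raw_words, corrected_words):
        # the training step: if the facility never calls this with the
        # editor's corrected document, recognize() never improves
        for raw, fixed in zip(raw_words, corrected_words):
            if raw != fixed:
                self.corrections.setdefault(raw, Counter())[fixed] += 1

engine = ToySpeechEngine()
raw = ["1", "gram", "of", "vodka"]
print(engine.recognize(raw))   # still "vodka": nothing fed back yet

engine.feed_back(raw, ["1", "gram", "of", "vancomycin"])
print(engine.recognize(raw))   # the correction now sticks
```

The editor's correction alone changes only the report; only the `feed_back` step changes future output, which is the distinction being argued here.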

So the fault lies not with the speech engine but with the people using the product.

I use speech recognition on a daily basis to re-dictate the reports, and have been doing so fairly steadily since 2004 ... so I know how it operates.

I suspect you're right. The MTSO isn't going to take the time to - train the speech engine
[ In Reply To ..]
to each physician's voice. Hence, the need to spend as much time on a document doing VR as one would just typing. If the pay were up to par with the amount of time being spent on VR, then I'm sure most of the people doing VR would not have a complaint.

The engine will never learn - sm

[ In Reply To ..]
under the circumstances that MOST VR editors are working. Most work for extremely large MTSOs. That means they have hundreds of facilities and thousands of dictators, each with their own accents, dialects, and unorganized, scattered thoughts. None of these dictators speaks in complete sentences using perfect grammar. They are totally unaware that they are dictating into a voice recognition program. The MTSOs specifically do not want them to know, so they will NOT modify their speech patterns in any way.

What you can produce with your echo dictation using your own personal speech engine is irrelevant. The more dictators the engine has to work with, the more confused it becomes and no amount of editing will be successful in "training" it.

then perhaps you can explain - Snow Bunny

[ In Reply To ..]
How I'm bouncing back and forth between about 10 different facilities, some with as many as 50 physicians who dictate on a regular basis, yet the speech engine is doing a fairly decent job of typing what they say accurately.

The dictators don't have to modify their speech patterns for the product to transcribe what they say correctly. You can putz around with the speech engine all you want. Let's say your name is Betty Boop. I can easily train the speech engine to type that when I say "Boop Boop Be Doop."

And the speech engine cannot get "confused" 'cause there's no thought process involved. It doesn't know the difference between "ice cream" and "I scream."

Look ... there will be debates about the topic from now until the cows come home from pasture. You either like it or hate it. The company I work for uses Nuance, which makes the speech engine (I believe it merged with Dictaphone).

I suppose the bottom line is ... if you don't like it, then work for a company that doesn't use it, ask to be given first consideration for any accounts which are non-speech, get your own accounts, or work in an office. 'Cause all the kvetching in the world ain't gonna make it better.

I'm not working at a company where I can say "Boop Boop Be Doop" - all of my work
[ In Reply To ..]
is being dictated by physicians, who apparently are not training the speech engine. All the information I have researched on VR indicates that the speech engine has to be trained to the voice, but as stated, my voice isn't being put into the system, so I am very perplexed that your voice is.

How are you doing echo dictation - sm
[ In Reply To ..]
using your own version of Dragon AND working for a national MTSO?

echo dictation - Snow Bunny
[ In Reply To ..]
My working for a national MTSO has no bearing on the issue. Dragon is simply another input device. There's no difference between your typing "tpia" to get "The patient is a" and my saying "the patient is a" and getting the words to appear across the screen.

Or do I not understand your question?

Did not ASK for YOUR - sm
[ In Reply To ..]
advice on what I should do, where I should work, etc. I am saying, working for an MTSO, I am not able to echo dictate anything, ever. I am not aware of any large MTSO where this method is allowed. Therefore, I can only edit. Not one physician who dictates into the system utilized by the MTSO I work for is "training the system." It just DOES NOT WORK THAT WAY. If they each had their own version of VR software and ONLY THEY dictated into that version, then perhaps they could train that version to their voice. This is not the case, obviously. Again, what you are doing with your own personal echo dictation is irrelevant to what is going on with a large MTSO's VR/SR.

no, it's not irrelevant - Snow Bunny
[ In Reply To ..]
I work for a MTSO and have about 60% speech to 40% non-speech. I would rate the accuracy of the speech dictation around 95% ... the biggest errors are in the punctuation. It's the non-speech dictation that I echo-dictate.

And believe me, some of the dictators I've done don't speaka good English, but the speech engine is able to transcribe what they say quite well.

