Emergency Exam E.A.G. Report

Release of the College Investigation into ‘Alleged Exam Prejudice’

The Australasian College of Emergency Medicine (ACEM) published the findings of the Expert Advisory Group (EAG) report yesterday.

While the process has taken some time, the terms of reference and externally commissioned reports suggest an attempt at a robust investigation.  A media release summarised the findings of Dr Helen Szoke and Professor Ron Paterson’s investigation.  An executive summary found “no specific evidence of discrimination in the outcome of the 2016.2 Fellowship Objective Structured Clinical Examination (OSCE) (sic)“, but identified many areas for future change and “improvement“.

The ACEM Board have “accepted the EAG’s report in its entirety” and “will respond in time to each of the recommendations“. The college have acknowledged a number of areas for improvement and appealed for help with implementation of the findings.

Future monitoring of the situation will require not only transparency of the exam process but also more detailed reporting on the performance of potential 'minorities' such as Indigenous Australians, Torres Strait Islanders, LGBTQI people, and candidates of different genders, educational backgrounds, ages, ethnicities and the like.  (Comments below will be open for 10 days.  Please show respect for all stakeholders and follow the posting "rules" outlined in previous post(s) on this issue)

15 thoughts on "Emergency Exam E.A.G. Report"

  1. MODERATED TITLE
    https://emergencypedia.com/2017/10/26/amc-social-media-policy-in-relation-to-blog-comments/

    THIS POST MAY BE DEEMED OFFENSIVE UNDER THE AMC SOCIAL MEDIA POLICY CITED ABOVE

    First of all I would like to commend the efforts of the EAG for all these months. They have worked tirelessly to prove that no discrimination occurred, even with a difference of 80% in results between the two groups.
    We cannot blame the EAG for doing what they were supposed to do. They were hired by the ACEM and were an "INDEPENDENT GROUP", even though most of the members were from ACEM. ACEM paid for the investigation and got the results that they wanted. How can they go against ACEM? They had to make the college look good in this fiasco.

    Let's go through some of the points in the report.

    3.15.1 there is no direct evidence to support bias against Group B candidates. Combining this with the assumption that, ‘during the station construction process these issues have been addressed as well and the stations have been scrutinised for any such possible bias’, Professor Schuwirth concluded that the difference observed is a true difference in performance.

    No direct evidence, but we will just assume that the college addressed the issues of bias – the complaint was that the college did not address these issues. The analyst ASSumed that the college had addressed these issues adequately. So the only possible explanation was that these non-whites were stupid enough to attempt the exam.

    3.19 While the psychometric analysis has demonstrated that examiner bias did not have a significantly statistically identifiable effect on the 2016.2 OSCE, individual examiner bias could not be ruled out completely. The EAG has identified a number of factors that may have had an impact on the examination outcomes. As scoring of candidates performance involves independent expert judgement, it is potentially open to subjectivity and conscious or unconscious bias. The absence of documented marking criteria for candidate performance for ‘minimum level of competence displayed’ for marking Fellowship OSCE domains such as communication, professionalism, scholarship and teaching, and leadership and management can lead to greater individual subjectivity in the evaluation of these domains and interpretation of a candidate’s performance as ‘just at standard’. Others factors may also have played a part: relatively low examiner diversity (e.g. in the 2016.2 OSCE 93% of examiners were from Group A countries of primary medical qualification and 7% were from Group B countries of primary medical qualification), sub-optimal discussion of cultural sensitivity in calibration of OSCE assessment criteria, and the fact that raters’ cognition is predominantly influenced by their own experiences, values and interests. Taken together, all these factors may have resulted in judgements giving rise to differential and potentially unfair outcomes for Group B candidates.

    Taken together, all these things mean that these candidates were discriminated against through the subjective, rather than objective, assessment of examiners. Why are these unfair outcomes only for Group B candidates and not for others? This is what the complaint called racial discrimination, because you belonged to the non-white group and all these factors only applied to the non-white group. What else do you want to call this – upliftment of the non-white group?

    4.8.2 Submissions received indicate there is a perception among DEMTs that candidates believe that if they are appointed as an ED Registrar, they are capable of working as a FACEM and therefore automatically able to pass the Fellowship examinations.

    The EAG does not even understand what the training process is. This perception among DEMTs is wrong, as trainees become ED registrars once they pass the primary exam. No one thinks they would be able to work at the level of a FACEM as soon as they are appointed as an ED registrar. It's only after they have worked at that level for a few years that you would think they are at the level of a FACEM, which they should be if trained properly. If they are not up to that level even after working 4–6 years, whose fault is it? You should look at those DEMTs who have not trained them to get to that level.

    Same mistake as they made in the interim report – admission to training did not guarantee you a spot in the OSCE exam; you have to pass multiple hurdles to get to the OSCE exam.

    4.8.8 Another FACEM sought feedback for a specific trainee and was told that the trainee’s performance “might have been good enough in India but was not good enough here”. Such experiences are not limited to these examples.

    This response from the exam committee to a DEMT shows how biased they are towards these trainees. "Good enough for INDIA". So the exam committee is clearly already thinking that IMGs from India are not good enough to work in Australia. Pretty good feedback from a specialist exam committee – exactly the sort of feedback the DEMT was expecting. The DEMT will then apply it to all his Indian trainees and deem them not suitable enough. No surprise they just keep failing.

    In this context we can also remember the rant of a senior consultant in The Australian who said "Indians are not even fit enough to be registered as Doctors".

    This is definitely not racial discrimination; it is just that the exam committee doesn't believe Indian doctors are good enough to be consultants, even though they trained at the same level as other white trainees.

    4.9.2 Examiners were also concerned about the matters raised in the interim report about examiner selection criteria. The selection criteria necessarily require prospective examiners to demonstrate their capacity to examine trainees. The pool of prospective examiners is limited to those Fellows who already participate in College activities. Of note is one examiner’s concerns that if a requirement of cultural diversity within the Court of Examiners was imposed, advice would be required on how this requirement can be implemented without introducing another form of discrimination of one cultural group being preferred over another.

    Yes, that's right. The culturally diverse group will create problems. Let's keep it all white and there should be no problems.

    5.8.4 The detailed analyses by candidate ethnicity show that although white candidates out-performed BME candidates, the differences were largely mirrored across the two different sets of examinations. Although the reason for the differential performance is unclear, the authors concluded that:

    “… the similarity of the effects in independent knowledge and clinical examinations in different specialty colleges suggests the differences are unlikely to result from specific features of either assessment and most likely represent true differences in ability.” (p.1).

    Yes, the non-whites are inherently stupid and cannot compete against the white man. So their abilities are lower than a white person's anyway, and they should be told to stay at their level and not aspire to higher abilities. It is already proven in research that BME candidates are inferior to whites; hence it proves that there is no racial discrimination even when there is an 80% difference in results between the two groups.

    6.21 That is not to say it is not occurring at all in the ACEM Fellowship OSCE. The College’s approach to standard setting and use of the Borderline Regression Method to determine the cut-score and pass mark required for each OSCE is appropriate and consistent with international use. However, the effectiveness of this approach relies upon robust standard setting for each domain/station through effective calibration of examiners to achieve a consistent and fair understanding of how to judge when a candidate’s performance is ‘just at standard’ for that station.
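
    For readers unfamiliar with the Borderline Regression Method named in 6.21, the sketch below shows roughly how a station cut-score is derived under that method. It is a simplified illustration with invented numbers; none of the scores, ratings or cohort sizes are ACEM data.

```python
# Illustrative sketch of the Borderline Regression Method (BRM).
# All numbers are invented for illustration only; they are NOT ACEM data.

# Per-candidate station score (e.g. summed domain marks) paired with the
# examiner's global rating for the same station:
# 0 = fail, 1 = borderline, 2 = pass.
data = [
    (0, 3.0), (0, 4.5),
    (1, 5.0), (1, 6.5), (1, 7.0),
    (2, 8.0), (2, 9.0), (2, 9.5),
]

def borderline_regression_cut_score(pairs, borderline=1):
    """Least-squares regression of station score on global rating;
    the cut-score is the predicted score at the 'borderline' rating."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    sxx = sum((x - mean_x) ** 2 for x, _ in pairs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope * borderline + intercept

cut = borderline_regression_cut_score(data)
print(round(cut, 2))  # predicted score at the 'borderline' rating
```

    The regression line ties the checklist-style station score to the examiners' global judgement, and the pass mark for the station is the predicted score at the 'borderline' rating – which is why the report stresses calibrating what 'just at standard' means for each station.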

    It is occurring, and that's what occurred in the OSCE exam. The examiners were given free rein as to what they could do, rather than keeping the exam objective. The examiners went about culling all those people who did not match their perception of colour, accent and race. Objectivity depends on a definite marking system and reproducibility of results. ACEM even declined to video record the exam, citing some silly reasons, even after the candidates had signed the confidentiality form.

    6.22 The EAG’s opinion is that the College’s management of the OSCE to date potentially enables systemic discrimination to manifest in some elements of the examination.

    On one hand the EAG loudly proclaims that no discrimination occurred, and then it makes multiple statements as to how it could have occurred, but it cannot explain the difference in the pass rates.

    7.4.7 The College has conceded that inadequate supervision and training of candidates in the workplace has possibly led to candidates who were not ready and/or not competent attempting the examination or continuing in the training program. The College advised that former ITAs conducted at the end of term appeared of questionable utility and ability to identify trainees who needed extra guidance and training in order to progress to a standard of practice at a specialist level.

    I cannot understand why suddenly, overnight, 90% of non-white trainees became unsuitable. The training was the same throughout for both white and non-white trainees. Even exam preparation was done together by both groups, who even attended the same courses. But on the day of the exam the non-whites suddenly became stupid and unable to articulate in the exam – almost 90% of the non-white cohort.
    Unless the college and DEMTs are running a separate training programme for whites, where they are taught separately how to become a consultant, this is not possible.
    Inadequate training and supervision is now the fault of trainees. The EAG and the college are trying to say that the training was shit but, as with everything else, it's the trainees' fault. The training was good and, as far as I know, most of the DEMTs work very hard to get you up to the level and will definitely tell you what's wrong with you. Most of the DEMTs didn't have a clue why their trainees failed, and the racist comments from the examination committee (e.g. "good enough for India") didn't make things easier.

    7.4.11 That there is a true difference in performance based upon the source of a candidate’s primary medical degree; that differences in their medical training can result in some candidates not being up to the clinical standard required due to non-comparable training methods and assessments.

    I agree it is a possibility in some cases, but 90% of the non-white trainees not being up to the clinical standard? This raises serious concerns about ACEM's suitability to be a training body.

    The EAG cannot be considered an independent body, as it was filled with ACEM members. ACEM investigated itself and gave itself a clean chit.

    Why apologise if no discrimination occurred?
    Why refund the exam fees if they were not at fault?

    The EAG has done very well at what they were told to do: to clear ACEM of the racial discrimination complaint and portray it as a benevolent institution trying to help inherently stupid non-white trainees.

  2. MODERATED SECTION

    https://emergencypedia.com/2017/10/26/amc-social-media-policy-in-relation-to-blog-comments/

    THIS POST MAY BE DEEMED OFFENSIVE UNDER THE AMC SOCIAL MEDIA POLICY CITED ABOVE

    But tell them that at the beginning, when IMGs enrol in the college – before they start their training in EM. Before they waste their time sitting at a table studying for countless hours on their days off, before starting a shift and after coming back home having finished a shift in ED. During this time we don't go out, we don't have a social life, no leisure activity. We could have spent that time with our families and loved ones.
    What is ACEM going to do about those FACEMs who played with trainees' lives? The stress all these processes have caused to trainees' lives, psychologically and financially? Is it going to let them continue to destroy the college (ACEM) just to bring forward the ideas of those few FACEMs?

  3. I fundamentally do not agree that there is a conspiracy. While prejudice is an inherent problem for humanity today, there is in my view no evidence of, or indeed likelihood of, a 'white' (them) versus 'other' (us) plan for destruction. I can only reference my own experience of being mentored and taught by gay, straight, male, female, white, black and brown (and other) humans in the diverse world of Emergency Medicine. So of course you are entitled to your opinion – and I am sorry that the experience has been life-changing for the worse, from the account that you recall. But – for me Emergency Medicine so far has led to stress, but also a strong sense of career and work satisfaction, plenty of downtime (even when doing exams), and usually enough rest to offset the hard work.

    1. I agree with what you say, Andrew. But you didn't have to go through what these non-white trainees went through. If you had had the same experience you would have been singing a different tune. All these trainees were expecting to have the same experience as you had. They thought it was a level field. But they were not aware that they had this inbuilt weakness and would not be able to perform at the same level. We were also taught by the multitude of people that you listed above, and the same people didn't have any clue as to why their trainees were failing. And now suddenly the report blames the trainees and states their training was inadequate.
      How do you explain that?
      I cannot blame any of my DEMTs or consultants for substandard training, and I believe that is the case with most DEMTs. How did this suddenly become an issue now?
      How many DEMTs have come forward and stated that their trainees were unsuitable or were not ready to sit the exam?
      How many of those 100+ trainees do you think were not ready? If you are not ready after multiple years of training, who is to blame?
      The EAG cites many reasons, some of which are not believable and outright silly if you consider that these trainees worked in the speciality for more than 6 years and still supposedly cannot be at the same level as their white counterparts.

      If there was no conspiracy before, there is one now, to save the ACEM from these complaints.

  4. Thanks Andrew
    I thought you would have given up getting into trouble by now. Good on you, mate, for standing up to the bullies.
    EAG report-
    Wow, what a turnaround from the interim to the final report.
    How come all those factors the EAG is quoting affected only one group, whereas the other group's pass rate continues to be 90%?
    So the EAG is now saying that the differences in pass rates shown in the graph they prepared are purely due to performance. I wonder why this exceptional performance does not reflect on the floor in SOME cases.
    I do not for a minute doubt the integrity of the EAG, but I disagree with their findings.

  5. It's because Andrew has not given up that we have a forum where we can talk about this. Nowhere else would we get any opportunity to talk about it. Hats off and many, many thanks to you, Andrew.

  6. MODERATED (delayed action due to public complaint)

    Please see the AMC social media policy (reference below)

    This comment is potentially offensive – it suggested 'corruption' within the FACEM group.
    This is unfounded.

    Please note that you are medical professionals and are breaching the AMC social media policy.

    Here is a summary:

    Troubleshooting: Have you ever … ?
    • Googled yourself? Search for your full name in Google, particularly ‘Australian
    Sites Only’ and ‘New Zealand Sites Only’. Do you feel comfortable with the
    results that are shown?
    • Posted information about a patient or person from your workplace on
    Facebook? Have a look through your old online posts and blogs;
    • Added patients as friends on Facebook or MySpace?
    • Added people from your workplace as friends?
    • Made a public comment online that could be considered offensive?
    • Become a member or fan of any group that might be considered racist, sexist,
    or otherwise derogatory? Browse through all the groups that you have joined
    and consider whether these are an accurate reflection of the person you are,
    and the values that you hold.
    • Put up photos or videos of yourself online that you would not want your
    patients, employers or people from your workplace to see?
    • Checked your privacy settings on Facebook or MySpace?
    • Felt that a friend has posted information online that may result in negative
    consequences for them? Did you let them know?

    Here is a link:
    https://ama.com.au/system/tdf/documents/Social_Media_and_the_Medical_Profession_FINAL_with_links_0.pdf?file=1&type=node&id=35198

  7. Blunt Dissection 4.10 !

    I thank the college and the EAG for the work that they have done. The effort and time are much appreciated. Here comes the moment for the 94% to think very carefully about what the future in emergency medicine holds for them. Point blank, they will not be consultants, unless of course significant changes occur to the process of training and assessment, among many other things that came up in the EAG report. Change only happens when there is an honest recognition of flaws. Remember: 3 attempts only. You guys have 3 attempts at the exam; the college has unlimited time to refine its process. There will be collateral damage – it could be you – but, like any other improvement for the greater good, we have to tolerate collateral damage. Our acceptance of drone strikes is the best example of our comfort with collateral damage.

    I do not believe that the EAG is a court of arbitration between trainees and examiners, or indeed the college; it's not meant to be. The report is a very useful document which, if taken in letter and spirit, will ultimately help the medical profession serve society better. Let's go through the key points of the report. The various parts of the report give recommendations, initially on the complaints and then on process improvement. As in every document, some things are said, some unsaid, some understood, some implied, and every bit can always be utilised by either side to suit convenience or protective propaganda.

    At the very outset I should put on record that I do not think the FACEMs and DEMTs are a homogeneous group, nor are the trainees. I am sure the majority of the FACEMs are truly committed to the field of emergency medicine, patient care, teaching and training, and trainee welfare. The trainees recognise the limitations of the FACEMs and DEMTs in terms of their ability to deal with the college hierarchy. Everyone has a family, needs a job, has bills to pay and a life to live. Trainees understand that FACEMs and DEMTs may not have a choice but to play along with a deficient system, and trainees, I am sure, note the hard work put in by the supervisor group to make the most out of a flawed program. I am sure every trainee has had an occasion when they asked their DEMTs or FACEMs uncomfortable questions about the training program and sensed the helplessness in the replies of their supervisors, which went along the lines of "no one knows what's going on" or "things are getting better". It's very unfortunate that we have come to a stage where an attempt has been made at creating a deep division between trainees and FACEMs. The college unfortunately failed to demonstrate unifying leadership. In the quest to shift blame, abdicate responsibility and only partially own up to delivering substandard training, coupled with making excuses and whistle-blower shaming, the college set the stage for further mistrust.

    The Bible, the Quran and the Bhagavad Gita are sacrosanct texts, as is the curriculum framework document of the college. You don't question them; ultimately they will lead you down the path to a higher life, whether in medicine or spirituality. You have to depend on the clergy to walk you down this path, while trusting that their interpretation of the texts is flawless. The very foundations of your faith begin to crumble, life begins to fall apart and you feel let down when the temple confesses to the lack of utility of religious rituals and the lack of knowledge in the clergy group, and that the sacrifices you made in the hope of "higher" attainment are useless, because the clergy and the testing mechanism are deficient.

    The good that has come of the ACEM training is 4.10; evidence of completion is proof of competency in research and analysis. 94% may be incompetent at being specialists, but 100% are research complete. With my 4.10 hat on, I shall indulge.

    Time to accept the first premise: that absence of evidence is evidence of absence. I could stop here without going through the report, but that would be an injustice to the report.

    I quote from the report to provide a background for further analysis. There are unfortunately some typographical errors in the report, more so in the areas where numerical data is quoted; especially when cohorts do not add to 100%, this makes my interpretation of the conclusions a bit difficult. Let's try and analyse. At the very outset it's important to state that the narrative has changed from an enquiry into discrimination to suggestions for improving a botched-up training system.

    5.2 It is evident that the change in the Fellowship Clinical Examination process and the introduction of the OSCE in 2015 has had a disproportionate impact on Group B candidates, reflected in their considerably lower pass rate.

    What does this mean? It is the fault of the 94% that failed. Let me explain: the examination process from 2015 is significantly better than the prior process and is being improved as we speak, and the incompetent 94% are being weeded out. This is part of the narrative; you will get to hear it more often and more vehemently, and, like everything else, when people keep repeating a statement it becomes the truth – remember Trump! Is this the case? Does the EAG say so? Does that mean that prior to 2015 incompetent doctors were becoming specialists? – Irrelevant questions.

    5.1 The complaint alleged that only 6.8% of NCCs passed the 2016.2 OSCE compared with 88% of CCs. Data provided to the EAG (refer Tables 4 – 6 and Figure 2, pp.34 – 36) and produced in the College’s Reaccreditation Submission to the Australian Medical Council and the Medical Council of New Zealand indicates that from the time of the introduction of these changes to the Fellowship examination format in 2015, a marked disparity in the pass rates between Group A and Group B candidates has occurred.12 Up until 2015, the data shows some disparity in the pass rates between these groups, however, the disparity in pass rates became markedly greater following the introduction of the new examination format.

    5.7 The literature discussed in the Farmer review highlights that findings of differential attainment cannot be dismissed as atypical or local to any one country or specialty examination. However, the College’s experience of such a marked disparity in results following the introduction of the OSCE examination format in 2015 does appear anomalous compared with other reported experiences.

    At this stage the logical first question is: "What happened in 2015 that led to the unprecedented magnitude of disparity, and why is this magnitude of disparity unique to ACEM?" Some disparity will always exist, but the year 2015, the new system, the magnitude of the disparity and its uniqueness to ACEM are the key issues. What are the other reported experiences?
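
    The scale of the quoted disparity can be put in statistical terms with a standard two-proportion z-test. The 6.8% and 88% pass rates are from 5.1 as quoted above; the cohort sizes below are hypothetical placeholders, since the actual candidate numbers are not given here.

```python
# Two-proportion z-test sketch for the quoted pass-rate disparity.
# Pass rates (6.8% vs 88%) are from the report as quoted above; the
# cohort sizes are invented placeholders, NOT actual ACEM candidate counts.
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-statistic for the difference p2 - p1."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical cohorts: 59 Group B (NCC) and 150 Group A (CC) candidates.
z = two_proportion_z(0.068, 59, 0.88, 150)
print(round(z, 1))
```

    Under any plausible cohort sizes the statistic is enormous (around z = 11 with these placeholder numbers), so chance alone is not on the table; the argument is only ever about whether the cause is true ability, training quality, or the examination itself.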

    The college provides a series of explanations by which the 94% could be judged incompetent based on exam performance, explanations which I believe came into relevance only in 2015 and did not exist prior:
    1. That prior to 2015, no speculative attempts at the examination were made.
    2. The country of primary medical degree started to affect performance with staggering significance only from 2015, and exclusively in the specialty of emergency medicine.
    3. More people fell into the borderline cohort of competence from 2015.

    Now that the above is accepted, let's move on. First, let's summarise (OSCE training): at this stage there is no evidence of significant bias; nothing happened in 2015 except that the 94% were simply incompetent; and though the magnitude of the disparity is unique to ACEM, that too is explained by the premise that the 94% are simply incompetent.

    Further exploration

    6.20 It is apparent from Professor Schuwirth’s analysis and Professor Farmer’s literature review that for unconscious bias to have a meaningful effect on candidates outcomes in OSCEs and to be a significant contributor to the persistent differences in outcomes between Groups A and B, it would need to be widespread and systematic amongst examiners.

    6.21 That is not to say it is not occurring at all in the ACEM Fellowship OSCE. The College’s approach to standard setting and use of the Borderline Regression Method to determine the cut-score and pass mark required for each OSCE is appropriate and consistent with international use. However, the effectiveness of this approach relies upon robust standard setting for each domain/station through effective calibration of examiners to achieve a consistent and fair understanding of how to judge when a candidate’s performance is ‘just at standard’ for that station.

    6.22 The EAG’s opinion is that the College’s management of the OSCE to date potentially enables systemic discrimination to manifest in some elements of the examination.

    What does the above mean? Let me try to understand:

    1. For significant disparity to occur, bias would need to be widespread.
    2. There is an internationally recognised method used to determine the cut-off score.
    3. The quality of the OSCE stations and marking system is questionable (as per the EAG).

    Irrespective, 94% of a cohort is incompetent.

    It's too soon to check your understanding; by checking frequently I may lose marks in the OSCE for assuming that you are rote learners like the IMGs.

    Let's move on.

    6.23 Some domains are inherently subjective in nature (e.g. communication, leadership and management, scholarship and teaching) and where there is no defined criteria for assessing performance, and insufficient calibration of a station, a candidate’s performance in these domains is typically assessed by reference to an individual examiner’s experiences and opinions of what the requisite standard of performance is for a consultant. Given the demographics and experience of examiners, this is likely to be informed by reference to dominant cultural values and norms expected in the practice of Emergency Medicine in Australia and New Zealand.
    6.24 These elements may have disadvantaged trainees who are not native English speakers and/or have obtained their primary medical degree from another country either culturally different to Australia or from a non-comparable health care system. However, the EAG has not been able to quantify the impact of these elements.

    What does this mean? At this stage I would like to explore the concepts and methods of testing of the above-mentioned domains.

    Communication skills and proof of practice: after practising for a minimum of six years, only 6% of the people of a particular group are capable of communicating at the level expected of a specialist. This is very concerning. 94% of this cohort had a problem all through the years with making referrals, professional interactions with peers, handovers, communicating with nursing and allied health, etc., and the deficiency was not picked up in the course of training, either in direct interactions or by complaint. This can be explained by the tolerance of society and the benevolence of the training scheme.

    Managerial skills: let's accept that the training scheme includes a robust managerial coaching program and that 100% of the trainees go through this process. But due to cultural and social differences, 94% of the cohort is simply not competent to be managers and leaders in Australia. Let's explore this further: reading chapters in a textbook and regurgitating key words in an exam is not rote learning, it's proof of managerial skill, which people attain on the day the results of the fellowship exam are released – no prior experience or exposure required. Leaders are born, not made. If socio-cultural aspects lead to poor managerial skills, incapacity for critical thought, poor communication and lack of leadership, let's explore this premise in the context of other OECD countries and industries: there is a disproportionately high number of CEOs of Indian and Chinese origin heading large global conglomerates, including Satya Nadella, CEO of Microsoft, and Sundar Pichai, CEO of Google. "They may be good enough for the rest of the world, not for..…".

    Is it time to recheck your understanding yet? Maybe, maybe not. Gamble effect at the OSCE.

    Next: the feedback process

    4.7.8 Such perceptions have been affirmed by DEMTs and FACEMs who have been unable to provide examination candidates with specific guidance on how they ought to prepare themselves for the OSCE; in some cases this is due to a candidate failing the OSCE whom the DEMT believed was at the required standard to succeed.

    4.8.6 FACEMs and DEMTs also submitted that while formal feedback may not have been given in response to ITAs and WBAs, informal feedback had been given (at least to some trainees) and this might not have been adequately recorded.

    4.8.7 Candidates’ overall preparedness is frustrated by a lack of specific feedback for those who fail one or more OSCE. A lack of feedback means that FACEMs and DEMTs are unable to assist their trainees to adequately prepare to retake the examinations.
    4.8.8 Trainees and DEMTs have found it extremely difficult to obtain feedback from the College following Fellowship Examinations. A specific example was given when a number of trainees failed a station where they were required to read an ECG. The trainees received no feedback and asked for assistance from their DEMT. The DEMT sought feedback on their behalf and was told that the information was confidential and could not be disclosed. Another FACEM sought feedback for a specific trainee and was told that the trainee’s performance “might have been good enough in India but was not good enough here”. Such experiences are not limited to these examples.

    What does this mean? Those responsible for training and supervision either do not have the necessary tools to coach, or the supervisors do not have the skills to assess the supervised. 4.8.6 is interesting: ITAs and WBAs do not serve the purpose of feedback. While DEMTs and FACEMs fail in their duty to provide feedback, it is the trainees who are blamed for failure, as per the next paragraph.

    Next

    Adequacy of preparation and training

    4.8.1 The general perception among FACEMs and DEMTs is that limited guidance on the process, content and marking for the OSCE being made available to the candidates prior to the OSCE impacts on their preparedness.

    4.8.2 Submissions received indicate there is a perception among DEMTs that candidates believe that if they are appointed as an ED Registrar, they are capable of working as a FACEM and therefore automatically able to pass the Fellowship examinations.

    4.8.3 DEMTs are concerned that trainees do not understand that FACEMs need a higher order of clinical, communication and administrative skills to be able to manage life and death decisions rapidly that ED Registrars may not be equipped with.

    What does this mean? FACEMs and DEMTs do not have information about the examination process. The trainer does not know the rules of the game, which is absurd. The supervisors do not understand the process, yet they are concerned that the trainees do not know what it takes to be a FACEM. The blind leading the blindfolded.

    The next two statements, 4.8.2 and 4.8.3, are frankly insulting. I would not like to credit them with a response, but I will put on record that they are childish, immature and blatant lies. I do not think the majority of FACEMs hold this view. I would urge those FACEMs who sleep at home confident that their registrars are taking critical, life-changing decisions and managing departments night after night to call out the hypocrisy of these statements. They owe it to their registrars and to society.

    Move on – training program

    6.32 The College undertook a limited data comparison to determine any correlation between WBA and OSCE results. It was noted that broadly there is a correlation between some outcomes of the WBAs and OSCEs, in that similar performance is noted in some aspects across both assessments from a broad pool of candidates.

    What does this mean? A “limited data comparison” found “some” correlation between “some” outcomes. That seems like very robust scientific methodology; end of story. WBAs are good when they correlate with OSCE outcomes; when they don't, it's because the incompetent 94% are getting away with their WBAs thanks to the kindness of the assessor or the pressures of the workforce.

    6.33 It is possible that inadequate supervision and training of candidates in the workplace has led to candidates who were not ready and/or not competent sitting the examination or continuing in the training program. The College in response to the interim report admitted this. The College advised that former ITAs conducted at the end of term appeared of questionable utility and ability to identify trainees who needed extra guidance and training in order to progress to a standard of practice at a specialist level. A review of assessments undertaken during training is underway under the auspices of the Council of Education.

    What does this mean? 94% of you were simply not consultant material. Doing WBAs, obtaining feedback and completing ITAs mean nothing. Your argument that your WBAs and ITAs are good no longer holds water: the college has accepted that they are of questionable value. So don't quote your WBAs or ITAs. If they are bad you get remediation or get kicked out of training; if they are good, they don't mean a thing.

    Let's move on.

    Examination process:

    4.9.5 The process by which the OSCE was introduced was flawed as it was poorly communicated and candidates were provided with little warning of the change. This has been further complicated by wide variations in cut scores, an inaccurate method of assessing specialist emergency medicine practice knowledge, very high pass rates for the 2015.1 Written Examination across all candidate pools, removal of feedback for unsuccessful candidates and removal of specific information about station content. The new style of examination also saw a change in the weighting of specific domains over others, so for instance in the 2015.1 OSCE, communication accounted for 30% of a candidate’s overall mark as it was tested in each station. However, in previous examinations it had only accounted for 5% of a candidate’s overall mark. In one examiner’s submission, this more than any other factor has resulted in a marked change in pass rates for Group B candidates.

    4.9.6 In addition to the changes in the oral examination, examiners have pointed to substantial changes to the format of the Written Examination. These changes have resulted in a less analytical assessment of deeper knowledge than was found in the previous written examination. The data has demonstrated that the percentage of Group B candidates passing the Written Examination has remained largely unchanged despite the introduction of the OSCE, with the exception of the Group B cohort who sat the 2015.1 Fellowship Written Examination where an 80% pass rate was achieved. Those candidates who would have failed that part of the examination in the past are progressing to the oral examination and then failing.

    4.9.7 Examiners submit that the change to an OSCE meant that the focus changed to a socio-linguistic performance test with no ability for the examiners to explore issues arising during the stations, due to limited if any examiner-to-candidate interaction. The change from an examiner led question and answer style of oral examination to a minimally interactive examiner style examination had a significant impact. This has also affected an examiner’s ability to assist candidates to demonstrate their actual knowledge.

    4.9.8 It is broadly submitted that issues related to the examination process and change in examination style greatly impacted examination candidates, and, in particular, the ability of Group B candidates to pass.

    What does this mean? Precisely what it says. The entire process of examination and assessment is deficient, as per the report. The logical question now: how can we have faith that this process ensures that those who pass have “specialist” level knowledge and skills, and that those who fail don't? Clearly we cannot have that confidence. And what does the result of the 2015.1 written exam represent?

    Findings: the most interesting part. You can all read it in section 7 of the EAG report; I want to touch upon what I consider important.

    7.4.6 Some submitters indicated that the current WBA process may be flawed due to the pressures of workforce resulting in assessors being reluctant to give poor WBA scores lest it jeopardize their staffing given the potential for poor performance in a WBA to trigger remediation, thereby essentially rubber stamping trainees who are not clinically competent. It is possible this reluctance to accurately score WBAs may result in candidates believing they are ready to attempt the OSCE in circumstances when they are not. The EAG notes however that the WBA does not filter candidates for the examination as there is no requirement for DEMTs/Supervisors to confirm trainees’ preparedness to undertake the examination. The EAG considers that the role of WBAs must be looked at in the context of future continuous improvement undertaken by the College and their impact on the OSCE.

    7.4.7 The College has conceded that inadequate supervision and training of candidates in the workplace has possibly led to candidates who were not ready and/or not competent attempting the examination or continuing in the training program. The College advised that former ITAs conducted at the end of term appeared of questionable utility and ability to identify trainees who needed extra guidance and training in order to progress to a standard of practice at a specialist level.

    What does this mean? The college says ITAs and WBAs are not good enough to determine your competence, yet remediation cannot be questioned. A bad ITA is bad; a good ITA has no value. The college and the EAG seem to say that the very pillars of the training program are faulty, and that no aspect of training is linked to the examination or its outcomes. I don't even know what to make of this! If training has nothing to do with examinations, no wonder exam preparation needs exam-centred courses; one would logically expect training to make trainees competent, with that competence demonstrated at examinations. And “rubber stamping trainees who are not clinically competent”: what of patient safety? Did hospitals collude with the college to put patients at risk by tolerating incompetent trainees? If so, society and consumers have a right to know.

    7.4.10 Deficiencies in the examination process (such as lack of ‘at standard’ criteria setting for the domains of communication, leadership and management, and scholarship and teaching, and suboptimal calibration of marking criteria), coupled with comparative low examiner diversity can give rise to the risk of subjectivity and culturally laden assessments of a standard, which may disadvantage a culturally diverse candidate group.

    What does this mean? The OSCE system has a significant deficiency (“substandard” would be an apt adjective), which can affect outcomes. The examination system is substandard, and yet those who passed have “higher level thinking” while 94% of you are incompetent.

    7.4.11 That there is a true difference in performance based upon the source of a candidate’s primary medical degree; that differences in their medical training can result in some candidates not being up to the clinical standard required due to non-comparable training methods and assessments.

    What does this mean? There is a difference in performance between the two groups. That's obvious: it is difficult for someone trained in Australia to practice in China, India or Pakistan, and the converse holds true. The issue has always been the magnitude of the disparity, its sudden onset in 2015, and the fact that it is seen only at ACEM. No one in their right mind would expect complete equalization; that is just not possible.

    Recommendations: let's go through some of them.

    8.6 To address the impact of these issues, the EAG recommends that the College consider:

    8.6.1 Reviewing the requirements and selection criteria (already underway) for entry into the FACEM Training Program.

    8.6.2 Consider what support or alternative options to an award of Fellowship could be made available to trainees coming to the end of their training term and who are unlikely to satisfactorily complete their training or demonstrate they are at the standard required to become a FACEM.

    What does this mean? Introduce a selection process (fifty shades of grey) and offer an alternate qualification. In the big picture: a poor training process, a questionable examination system, an absent feedback mechanism, and the solution is to filter trainees at entry or take them off training, as if that will fix the problem. Silencing!

    Bottom line:

    3.18 The EAG considers that the introduction of the Fellowship OSCE in 2015 has had the unintended consequence of giving rise to systemic racial discrimination.

    The report is a great document and we need to respect it. It is understandably a carefully worded piece that tries to strike a balance between multiple issues.

    Based on the EAG report and ACEM's admissions: the 2015 roll-out of the examination process was suboptimal, bias cannot be ruled out, supervision during training may be inadequate, FACEMs and DEMTs were not aware of the requirements of the examination, there is no objective feedback, WBAs and ITAs are of questionable value, there are issues with the written examination, and multiple components, from station calibration and examiner variation to non-standardized evaluation of the OSCE, can disadvantage people. So what exactly is right with the entire process? From the EAG report: nothing; every component of the training is substandard. It is deeply disturbing to know that at least 4 years of advanced training was suboptimal.

    However flawed the system may be, 94% of you are incompetent. So to improve the system, the college suggests limiting your attempts, removing you from training and offering alternative qualifications, while selecting people who can demonstrate competence in a system the college now says is flawed.

    A concerning fact is that it took an EAG to bring these issues to light: not the FACEMs, not the DEMTs, not the medical leaders. Did they not recognize them, or did they just remain silent?

    The report unfortunately fails to answer one specific question: why did the pass percentage of IMGs fall so dramatically from 2015? Why was the magnitude of the disparity so high? Why is this extent of disparity exclusive to ACEM? The question was never “does disparity exist or not”; the question was why so large and so sudden? Trainee shaming is not a solution.

    The report raises many questions:

    The most significant question is why the college confessed to a poor training process, inadequate supervision and the irrelevance of training exercises (ITAs and WBAs), while vehemently denying willful discrimination. It is not difficult to understand. The fallout from proof of willful discrimination would be phenomenally larger and, more importantly, a legal issue; confessing to poor quality of training is neither legal nor binding, unless someone chooses to take the college to court for its admitted incompetence. In a long legal battle, three attempts and the training time will fly by, after which it does not matter anymore.

    The admission of incompetence achieved multiple positive objectives for the college:

    1. Divert the attention of the EAG into commenting on subjects that the EAG had no mandate to inquire into, thus ensuring that the findings are neither binding nor legal.

    2. Correcting the flaws in the training program can be very effectively marketed using terms like “ongoing and continuous”, “welfare and equality”, “service improvement and quality assurance”, giving the college unlimited time with no accountability, while any little change can be advertised under big banners and presented as proof of action. “The examiners are getting better at writing questions”, “workshops are being conducted”. In plain language: they are just not good enough, but you have to believe that the process delivers “specialists”. Sounds like the NBN.

    3. Publishing insinuations like “registrars think they are FACEMs” and “do not have the capacity to take time-critical decisions” in the EAG report, while admitting that feedback is poor but kind to trainees, is a clever but despicable ploy to discredit the entire trainee cohort (CC and NC), who form the foundation of emergency medicine service delivery.

    4. The college chose to raise doubts about its own flagship programs like the ITAs, questioned its feedback processes involving WBAs, and abandoned the FACEMs and DEMTs with regard to their ability to supervise and provide feedback, while offering them an exit route by way of excuses for their failures. This is an attempt to cover up systemic issues and to provide simplistic answers to complex structural and cultural problems, while ultimately laying the blame on the trainees.

    5. The college (which is not walls and buildings, but the people responsible for running the institution) surely understands human psychology. A bad roll-out of the new training program, inadequate training and supervision, under-resourced and ill-equipped trainers, a poor feedback mechanism and a questionable examination system affect only those who do not pass the exam, and the college knows this well. After all, no one who succeeds at the exam, however flawed the mechanism, will ever complain. A fact of life. Just keep feeding the egos of the successful with seductive phrases like “higher FACEM level thinking”, and at some stage people will start to believe that such an entity exists. This is reminiscent of the colonial policy of divide and rule: the colonial masters of South Asia gave local leaders titles like “raja bahadur” to indulge their egos, ensure their submission to the throne and empower them to control the vulnerable who put their faith in these local chieftains. It's sad, heartbreaking.

    2. Did the specialists with “higher” cognitive, managerial and leadership abilities, entrusted with responsibility for the training program, not recognize the glaring flaws? Or did they just remain silent, putting trainees and patients at risk? Did those in positions of responsibility fail in their duties? It is now more than apparent that they did; how can they be held to account? Trainees have three attempts at a suboptimal exam, but those in the college have no accountability?

    3. Why did the FACEMs and DEMTs not voice their concerns about the poor roll-out and the lack of information regarding the examination process, and fight for a better system? Why did they just play along with a flawed system? Why did they market a deficient system to their trainees? Did they not owe it to their trainees and the community at large to pressure the college into correcting the lacunae in training, the examination process, the ITAs, WBAs and supervision, instead of wasting the time and lives of their trainees?

    4. Is it not hypocrisy, if not blatant deceit, to submit to the EAG that registrars are foolish enough to think they are FACEMs, and further state that registrars are not capable of taking time-critical, life-changing decisions, when everyone working in emergency medicine knows that every night the registrars operate independently and safely, taking those so-called time-critical, life-changing decisions while FACEMs are in bed? Is this what we expect of our role models?

    5. Why is there a culture of silent acceptance of a flawed mechanism?

    6. The bigger question: now that we know the process is suboptimal, how can the community trust that those who were successful at the exam have the skills and ability to practice at a specialist level?

    Many contradictions need to be explained and addressed; I will quote one. The DEMTs confess to not having the requisite information about the training program and examinations, yet they could ascertain whether or not trainees were ready for those examinations. That does not even seem logical. Or is it about gut feel?

    After years of training it is hard to believe that the leaders and role models in the college hierarchy did a poor-quality job. Every trainee has to look back at the years spent preparing for ITAs, WBAs and examinations, only to learn now that no one cared and the whole thing was a cruel joke. Policy was being made on the go.

    Whether or not there was discrimination is one issue; addressing the disparity in pass percentage with honesty will help the community in the long run. Clever verbiage, crafty insinuations and whistle-blower shaming are fantastic tools in politics, but I am not sure they serve society. The fact that every aspect of the training program and assessment has been deemed, or confessed, to be less than appropriate is simply heartbreaking. Years wasted.

    Way forward:

    The only people we owe an explanation to are the vulnerable clients who seek our services in the emergency departments. We owe it to them. This is not just about bias, discrimination and examination results. It is about an incompetent system that delivers suboptimal training and hence cannot assure the quality of the specialists it produces. It is an intellectually flawed argument to state that the quality of training and examination is seriously deficient, yet those who are successful are competent and those who do not pass are incompetent. It is absurd and unsafe.

    Those who are passionate about the specialty, about patient care and about training the specialists of the future have to come together and find solutions. The college and the silent bystanders have failed the profession, the trainees and society. Let's get together and fix this mess. Remember, the registrars may not be FACEMs, but they are not dumb, blind or deaf. They can recognize incompetence. They can tell the difference between “under-confident referral specialists” and “competent emergency physicians”. They can observe panic, stress, loss of situational awareness and cognitive chaos quite well. That they do not point out incompetence does not mean they do not see it.

    One in every three people on the planet is either Indian or Chinese; that is the simple reality of life. Globalization is here to stay. People in the western suburbs of Melbourne may find an Australian doctor insensitive, culturally inappropriate or lacking in empathy. In a country of migrants, expecting uniformity of cultural behavior is naive. Let's get our act together and help deliver the highest quality of care. We are clearly deficient now.

    Wars of perception and trials in social media will result only in loss of professional credibility.

    I apologize for any unintended bias in my writing, of which I have found no significant evidence following my independent analysis using an internationally valid tool called common sense.

    P.S. The reasons for the disparity in results are “multi-factorial” (a term I tend to use when I don't have a clue what's happening with a patient's presentation; it makes me sound smart and always gets me out of trouble).

  8. The EAG suggests a refund of the exam fee. They also found multiple faults in the training, so why not refund the training fees paid over the years? It calls the college's honesty into question and supports the bias theory.

  9. MODERATED (delayed action due to public complaint)

    Please see the AMC social media policy (reference below)

    Note on this comment by Andrew – I DO NOT know who posted this. I was a bit upset by it, as you can see from my comment at the time, and it has become apparent that this ‘w****hoax’ has offended others.
    I did make it VERY clear that we needed professional decorum on this forum (excuse the rhyme) – with hindsight I have deemed that this comment did not stick to those rules.

    Please note that you are medical professionals and may be breaching the AMC social media policy.

    Here is a summary:

    Troubleshooting: Have you ever … ?
    • Googled yourself? Search for your full name in Google, particularly ‘Australian
    Sites Only’ and ‘New Zealand Sites Only’. Do you feel comfortable with the
    results that are shown?
    • Posted information about a patient or person from your workplace on
    Facebook? Have a look through your old online posts and blogs;
    • Added patients as friends on Facebook or MySpace?
    • Added people from your workplace as friends?
    • Made a public comment online that could be considered offensive?
    • Become a member or fan of any group that might be considered racist, sexist,
    or otherwise derogatory? Browse through all the groups that you have joined
    and consider whether these are an accurate reflection of the person you are,
    and the values that you hold.
    • Put up photos or videos of yourself online that you would not want your
    patients, employers or people from your workplace to see?
    • Checked your privacy settings on Facebook or MySpace?
    • Felt that a friend has posted information online that may result in negative
    consequences for them? Did you let them know?

    Here is a link:
    https://ama.com.au/system/tdf/documents/Social_Media_and_the_Medical_Profession_FINAL_with_links_0.pdf?file=1&type=node&id=35198

    1. I find this comment disproportionate, offensive, and frankly upsetting. We have previously established that the groups are not “Caucasian” and “non-Caucasian”, so this us-versus-them attitude just does not fly with me. Channeling your energy into anger (I am judging this emotion based on reading your post) and false facts (the assumption that there is a White conspiracy among medical professionals) is unhealthy. You should consider moving on from where you are now for the sake of your health.

      Both Sam and whitehoax (the name says a lot) are missing the mark. To think anyone much above the lower echelons of government cares about a few people passing or not passing a professional exam in a minor medical college, regardless of the controversy over the new exam process, is a fallacy. Even when we look at the effect the exam change had on the success rates of candidates trained in five multicultural countries (by no means exclusively white) versus all other countries, including people of various colours, it is observational, general data, which in no way proves causation. You can interpret this data emotionally from your personal perspective, but it is about as useful as a glove without a hand. You can make no firm conclusions other than that the pass rate was lower, which I accept needed to be investigated. The EAG was set up and came to startling conclusions.

      For the record, I argued in a previous post that more solid data has pointed to bias in UK exams, all other things being equal. So I think it is important that we improve our data collection to get a clearer picture of exam performance.

      I think the biggest issue here (highlighted by the EAG) was the inevitability that such a big change would come with unexpected consequences. The reality is that this change was required: assessments drive learning, and the OSCE is driving trainees to show their skills at a higher level of application than the old exam. But it has come at the cost of trust in a murky statistical process, and has created a new collection of exam and WBA games that we don't yet fully understand.

  10. Let's work together!

    At the very outset I would like to thank the IMGs who brought many of these issues to the fore, and congratulate them on their hard work, perseverance and dedication to the improvement of our profession. Hats off. I do not think the press and the legal system should be the only means of getting the college to acknowledge a problem.

    It has been a week since the publication of the EAG report. Who has read it? Who is discussing it? Who does it matter to? The answer to the first two questions is clearly only the IMGs; as to the third, it should matter to each of our clients, to our country and to every one of us. 10 days ago we were presented a report that essentially states two points:

    1. That there was unintended bias. Debate over whether “bias is discrimination” is a distraction. That bias exists is known; the questions are about its quantum, its abruptness and its linkage to the change.
    2. The entire training process is grossly flawed, with no checks and balances and a complete lack of transparency.

    So what has happened in the past 10 days? It looks like business as usual.

    Many people in the profession have worked abroad. What would be the fallout if a postgraduate medical training provider confessed to poor training, supervision and assessment in the United States, the United Kingdom or Canada (comparable health care systems), or in India, Pakistan or China (non-comparable systems)? All of us are intelligent; we know the consequences for the training body in both groups of countries. We seem to belong to the latter group here.

    We have to confront the elephant in the room, the EAG report. The lack of system controls, checks and balances needs to be addressed. The control systems are deficient and the entire process was designed to be opaque, making it substandard, be it the ITAs, WBAs, written examinations or the OSCE. The ITAs are DEMT-dependent and the WBAs FACEM-dependent, with no transparency or accountability. Why could these not have been 360-degree appraisals from the beginning, as in the UK? The whole training document is a modified version of the UK college's, but when it came to the administration of training, those responsible chose to create a system with no checks or accountability. It is not as if guidance from training programs in Australia or elsewhere is unavailable; those who designed it simply did not think accountability or transparency needed to exist. An examination system in which “no one knows what's happening”, “no one knows what's required” and there is no feedback, yet everyone is compliant, reeks of a dictatorship with meek followers. All of us were silent spectators and ultimately failed our clientèle. Why was everyone silent?

    I would like to discuss this further, to find solutions, not to apportion blame. We need to improve on many fronts and restore our now nonexistent credibility!

    Let’s begin

    1. WBAs and ITAs: the college says they are of questionable value and always have been. So where to from here? Keep doing the same thing and expect a different outcome, or start a time-bound process of improvement? Another question: what about those whose careers were affected by these “pillars of training”, the people who received remediation or were terminated from the training program? It is time the college published data on WBAs and ITAs in relation to rankings, remediation and comments, tabulated by candidates' country of primary qualification. This would be a very useful statistical tool for identifying those individuals in positions of responsibility who may be below the standards expected of a supervisor, compromised by prejudice or bias, and in need of support to develop “FACEM level thinking”. If the college does not have the data, I am sure the IMG networks can collect it and provide it to the college, the same way information was collected about exam results. The follow-up question: when will the college apologize to all those who were victims of this flawed regime of ITAs and WBAs, compensate them for their loss, and rework their remediations?

    2. Supervision and independent practice: now that we know there are serious deficiencies in supervision, and given that some DEMTs and FACEMs are of the opinion that trainees are incapable of taking “time-critical, independent, life-changing” decisions, should we encourage the specialists to kindly work on the floor and supervise the trainees overnight in the interest of patient safety? Or should we have a mechanism that ensures the competency of senior trainees to manage emergency departments overnight? I believe the college should apologize to the vast majority of trainees who are competent and perfectly capable of safely handling complex medical issues while managing emergency departments. These statements about trainee caliber are immature and unhelpful.

    3. Content of training: while the EAG did not comment on this, I think it’s appropriate to discuss it.

    a. 4.10: 100% compliance. In this day and age, evidence is available in the palm of your hand: UpToDate, ClinicalKey and so on. High-quality research is now a corporate-style activity: funding, data collection, managerial set-up, administration and more. It is an entrepreneurial skill for those with research acumen; the vast majority are consumers of the evidence. Forcing research on every trainee is a fundamental waste of time with no benefit. Those interested in research will do it anyway; those who are not will rely on technology that is widely and freely available, used by the overwhelming majority of the profession, and still deliver evidence-based care.

    b. Managerial skills and leadership: it is hypocrisy to assert that the training does not include any managerial experience, yet that passing the exam is proof of leadership. The training program needs to include hands-on managerial experience. It is wishful thinking that managerial and leadership skills can be acquired by reading a few pages in Dunn. Poor medical-expertise training coupled with deficient managerial skills leads to under-equipped FACEMs, and in turn to poor leadership, managerial ability and prioritisation, and ultimately to poor patient outcomes.

    c. The training program does not mandate that a trainee take on the responsibility of handling night shifts independently, yet passing the examinations puts these trainees in a “consultant” position with responsibility for the pressures of the whole department. There are many times when I feel extremely sorry for the under-confident, under-trained supervisors. I feel their pain: their training failed them, and they are forced to use their “consultant” position over trainees and inpatient registrars to cover up a lack of medical expertise and leadership, initiating a cycle of abuse and acceptance of incompetent role models. I quote an incident: a FACEM was making an inappropriate referral to a senior inpatient registrar, and after a long conversation the registrar wondered “whether FACEM was a title or an entitlement”. I think it is a qualification, and we need to restore its credibility.

    d. Complaint handling: again, reading texts and regurgitating phrases in the FACEM exam is not going to deliver, especially in a multicultural society. Is it not important for trainees to be involved in complaint handling and clinical governance processes? A portfolio of 10 complaints dealt with under supervision is a great learning tool. “Apologize without accepting liability” gets 100% marks in the FACEM exam, which is exactly what the college did, and I can see why.

    e. Medical expertise: I have never known textbooks to substitute for hands-on experience. It is a fundamental failure of the training process that senior trainees in their last 6 months of training are still learning aspects of blood gas analysis, ECG reading, toxicology and critical care pharmacology, while some junior FACEMs are using telephone apps for gases, cardiology registrars for ECG analysis, toxicology consultations for common overdoses and ICU advice for resuscitation. Why are trainees not becoming competent, and FACEMs comfortable, with these basic tools earlier in their careers? It is a monumental failure of teaching, supervision and training program requirements.

    4. The other issue is the controls and quality assurance mechanisms in college assessment processes. The 2015.1 exam was compromised; following this, multiple written and OSCE examinations were suboptimal for multiple reasons, so what has changed? I am told that as late as 2017.2 the SAQs were of questionable utility. I spoke to multiple people who sat the exam: those who passed do not know why, and those who did not are in the same position. The exam, unfortunately, is reduced to a gamble, because no one knows anything, just like roulette. It seems hard to believe that no one is clear on what is expected or required. Let me expand:

    Complications of ECMO, LVAD and IABP, ultrasound findings in Dressler's syndrome, ACEM criteria for SSW setups, thrombotic versus embolic ischaemia. Really! The fact that these made it into the paper is a perfect example of the lack of controls. These questions reflect the examiners' improper understanding of the requirements of an emergency physician, and this lack of comprehension is likely a product of the poor training process the college confesses to. I know many trainees who checked with their directors about the SSW question, and the answer was “I don't know, refer to the website”. In any case, setting up an SSW, like the second airport in Sydney, is a consultative process, not something decided by a FACEM. It is a poor use of intellectual potential to merely regurgitate what is available on the ACEM website, which has no relevance to practice or managerial ability.

    The argument that “we are getting better” is silly. Try telling your client: “your adverse outcome is because of our poor quality assurance mechanisms over the past many years; we knew of it but did nothing. However, we are getting better, so please let us continue doing the same thing until we are able to get our act together. Just trust us!” When a program is rolled out, ideally it is pretested, quality assured, open to suggestion and critique, and self-reflective. There is no evidence that the college's program rollout followed any established principles of project management.

    Next, it is reasonable to ask why trainees are not able to use “consultant level language” after years of training. The answer is simple: their trainers, the consultants, are not using this language routinely. So this consultoid language component of the written exam is an artificial expectation, purely for examination-day utility. Conversations on the floor, during referrals or at handovers are not reflective of this so-called “consultant terminology”, and that is exactly why trainees are not developing the skill. Spending 6 months before the exam learning these terms and then never using them once the exam is done does not make any sense.

    We need to get together and get our act in order; our professional reputation and international standing are at stake, as is our duty to provide the best care to our clientèle. We have to start by identifying the factors and systems that led to the creation of this draconian and flawed mechanism, to make sure it never happens again.

    If I were a responsible senior managerial member of an organisation that had just confessed to poor control mechanisms and a lack of transparency and accountability, that had tried discrediting its hardworking, dedicated front-line staff while potentially putting consumers at risk, and that had been found to have unintentional bias in its product assessments, I would have done the following:

    1. Offer an unconditional apology for each of the issues identified, and state them.
    2. Brief the entire stakeholder community within 48 hours, acknowledging each of the issues raised, and apologize individually for each of the failures.
    3. Revoke remediation for the trainees and apologize.
    4. Revoke the limits on examination attempts until such time as the college is able to roll out a time-tested, consistent, reliable and transparent training program that can competently deliver specialists to society.
    5. Publish data on the outcomes of ITAs and WBAs, and the country of origin of candidates, as soon as possible. Identify those supervisors needing support and set up a mechanism to help them improve.
    6. Set priorities and time-bound remedies: publish the questions and expected answers for all past written examinations, including OSCE stations with feedback, within 3 weeks; improve the WBAs and ITAs within 6 weeks, in consultation with other training programs in Australia and overseas; establish a mentorship program for recently qualified specialists to assist with medical expertise, leadership, management and supervision skills within 3 months; clearly define the expectations of the fellowship exam, with workshops for DEMTs, within 3 months; and develop and publish resources for fellowship examination preparation within 3 months.
    7. Establish a robust trainee advocacy mechanism and IMG coordinator groups.

    I do not believe anyone is asking for special treatment; asking for procedural fairness and process improvement to ensure high-quality specialist training is not a demand but an expectation.

Comments are closed.