
Good (or bad) vibrations: clinical intuition in violence risk assessment

Published online by Cambridge University Press:  02 January 2018


Summary

Assessment and management of the risk of violence in psychiatry inevitably and appropriately draws on emotionally laden ‘intuitive’ modes of thought, as well as deliberative analytic thinking. This article discusses the concept of ‘intuition’ and proposes a brief set of guidelines, derived from work by the cognitive psychologist Robin Hogarth, by which intuitive thinking may be applied by clinicians when undertaking risk assessment work. The guidelines are: consider the learning structure relevant to the risk assessment task; use your own emotions as a source of data; impose ‘circuit breakers’ such as cost–benefit analyses and validated structured risk assessment tools; and use a narrative approach to develop formulations. The guidelines are intended to provide a framework for ongoing reflective practice in assessing and managing risk.

Copyright © The Royal College of Psychiatrists 2012 

Making decisions regarding risk is an integral part of clinical mental health work (Flewett 2010). A plethora of tools and approaches (Quinsey 1998; Otto 2000; Douglas 2010a) now exists to assist with clinical judgement when undertaking this task in the context of risk of violence. The tremendous empirical advances in the field over the past two decades (Douglas 2010a) have seen a welcome increase in emphasis on systematic, structured approaches. In tandem with these developments, however, the role of ‘clinical intuition’ has been marginalised or even denigrated (Quinsey 1998). This article will assert that the intuitive mode of thought has considerable value for clinicians charged with the task of violence risk assessment, provided it is applied in a thoughtful and systematic way. It will outline practical guidelines for such application, derived from the work of cognitive psychologist Robin Hogarth.

Contemporary models of intuition

Psychologists have long distinguished between two modes of thinking, referred to here as ‘deliberative thinking’ and ‘intuitive thinking’ (intuition). The main differences between the two are shown in Table 1. Betsch (2008: p. 4) defines intuition as:

TABLE 1 Characteristics of ‘intuitive’ and ‘deliberative’ thinking processes

‘a process of thinking. The input to this process is mostly provided by knowledge stored in long-term memory that has been primarily acquired via associative learning. The input is processed automatically and without conscious awareness. The output of the process is a feeling that can serve as a basis for judgement and decisions’ [my italics].

This definition does not explicitly include instinctive behaviours, although other theorists accept that intuitive responses can involve ‘a mix of innate and learned behaviour’ (Hogarth 2008).

Central to this definition is the concept of a long-term knowledge store, based on a person’s experience of being exposed to a variety of stimuli and corresponding responses and outcomes. These associations are acquired experientially via observational learning and instrumental and classical conditioning (Epstein 2008). Two principles underlie such learning (Hogarth 2008):

  1. observation of frequencies of events and objects in the environment and the extent to which they co-vary;

  2. the principle of reinforcement: that is to say, the positive (reward) or negative (cost) value of environmental phenomena.

Importantly, the output of the intuitive system is a feeling rather than a verbal proposition. Often this will involve at least a ‘faint whisper’ of emotion (Slovic 2004), indicating a specific quality such as goodness, badness or riskiness. However, less emotionally charged feelings such as the ‘feeling of knowing’ (Liu 2007) can also occur. Such feelings are believed to help people to navigate quickly and efficiently through the complex and sometimes risky environments of everyday life. They have attracted a variety of terms, such as ‘vibes’ – defined as ‘vague feelings such as disquietude and agitation’ (Epstein 2008).

An influential neurobiological model of decision-making – the ‘somatic marker hypothesis’ (Damasio 1994) – similarly posits a long-term knowledge store that links new stimuli to ‘somatic markers’: internal mental representations tagged with positive or negative feelings and tied to distinctive visceral states, which guide decisions advantageously and mostly unconsciously.

Research in cognitive psychology has confirmed that, depending on the predispositions of the person and the characteristics of the task at hand, some decisions are made following a predominantly effortful thinking process and others tend to be reached with more reliance on rapid, emotionally coloured ‘gut feelings’ (Deutsch 2008).

Most decision-making theorists now accept that optimal decision-making requires an integration of both deliberative and intuitive modes (Baumeister 2007; Plessner 2008). The two systems appear to operate in parallel and depend on each other for guidance (Slovic 2004), with the relative contribution of each system varying according to the situational demands and the person making the decision (Epstein 2008). In practice, we can have reflective thoughts about our feelings and intuitive feelings about our thoughts, such that the two systems interact simultaneously and sequentially (Epstein 2008). Various terms have been coined for the proper integration of both modes of thought to produce truly ‘rational’ decision-making, including the concept of ‘affective rationality’ (Peters 2006) and ‘the dance of affect and reason’ (Finucane 2003).

Problems with intuition in risk assessment

Progress in violence risk assessment in clinical practice reflects the realisation that both modes of thinking have potential value. Methodologies were previously limited either to purely unstructured approaches (which gave clinical intuition free rein; Monahan 1984) or to hyperrational, actuarially based models (which sought to minimise the role of intuitive clinical judgements; Quinsey 1998). However, there is a third way, generally called ‘structured professional judgement’, that attempts to integrate the best features of each approach and has been shown to be both reliable and valid (Douglas 2010b).

The emerging consensus in the decision-making literature is that integrated models incorporating both deliberative and intuitive reasoning are most appropriate to meet real-world challenges. However, the precise role of the intuitive system remains underdeveloped in the context of violence risk assessment. There are several possible reasons for this (Box 1).

BOX 1 Key problems with using intuition in violence risk assessment

  1. Validity and reliability: research suggests that the intuitive system lacks validity and reliability in at least some assessment contexts (e.g. longer-term assessments)

  2. Opacity: assessments based on intuition are opaque to evaluation by others

  3. Heuristic biases: human judgements are subject to various heuristic biases, such as the affect heuristic, racial prejudice, groupthink and cognitive dissonance

Concerns about validity and reliability

The first generation of empirical research looking at the capacity of clinicians to predict long-term ‘dangerousness’ of patients is notable for its findings that even experienced clinicians could do little better than chance in projecting long-term outcomes of patients with a history of violence (Monahan 1984). Given that different clinicians are likely to have had vastly different experiences, and hence different knowledge stores, this is unsurprising (see footnote a).

Opacity

This was referred to by Meehl (1954) as the aspect of so-called clinical expertise that is the most ‘irritating to non clinicians […] when asked for the evidence [the clinician] states simply that he feels intuitively that such and such is the case’. Such invidious usage of clinical experience has led to its being denigrated as a ‘prestigious synonym for anecdotal evidence’ (Grove 1996). In the realm of violence risk assessment, when defending opinions before tribunals or courts, such opacity renders expert opinions open to challenge on grounds of fairness and validity.

Heuristic biases

The vast research base on the propensity of human judgement to show certain predictable biases has now been extended to the clinical realm (Ruscio 2007), including violence risk assessment (Slovic 2000). Decisions made without a significant input from the deliberative system appear to be prone to bias. Various classes of such bias exist but the ‘affect heuristic’ (Finucane 2000; Slovic 2004) may be particularly relevant to violence risk assessment. The essence of this bias is that, just as imaginability, memorability and similarity can serve as misleading cues for probability judgements (the ‘availability’ and ‘representativeness’ heuristics), so affective signals can also mislead with respect to both probability and consequences of particular outcomes. For example, it has been shown that even experienced forensic mental health clinicians are biased towards a more conservative assessment of risk when given information in a format that encourages concrete images (which are more likely to engage the affective system) rather than in the abstract format of percentages (Slovic 2000). The unfortunate bias of racial prejudice has also been shown to engender misleading judgements of risk of violence (McNiel 1995; Wittenbrink 1997). In addition, interpersonal influences may bias decisions, particularly in mental health contexts where team-working is the norm (as previously explored in this journal: see Carroll 2009), by mechanisms such as ‘groupthink’ (Janis 1982) and ‘cognitive dissonance’ (Festinger 1957).

Guidelines for using intuition

The intuitive system can lead to dangerously biased and prejudicial decisional outcomes. It is therefore unsurprising that pure unstructured clinical intuition is now rarely advocated as best practice for violence risk assessment in mental health. However, the widely advocated risk assessment approach of ‘structured professional judgement’ (Webster 2007) must perforce incorporate the professional’s intuitive functioning. Therefore, the question is not ‘Can intuition play a role?’, but ‘How can intuition best be utilised?’.

Ideally, deliberative and intuitive modes need to be carefully synthesised with relative emphases and inputs appropriate to the decision or task at hand. Preclinical laboratory-based research suggests that ‘either system can out-perform the other depending on the nature of the problem at issue’ (Epstein 2008). Therefore, the task is how to apply reason to temper strong emotions engendered by some risk issues and how to infuse ‘needed doses of feeling’ into circumstances where lack of experience may result in decision-making that is too ‘coldly rational’ (Slovic 2004).

The structured professional judgement literature (e.g. Webster 1997; Douglas 2001; Webster 2004) and associated training workshops (e.g. Ogloff 2011) tend to focus largely on the ‘structured’ aspect – the operationalising and rating of empirically derived risk factors – but provide less guidance for the ‘professional judgement’ aspect. The following section attempts to provide such guidance, based on proposals developed by Hogarth (2001), a cognitive psychologist who has extensively researched the functions and limitations of intuitive thinking. Essentially, his proposals comprise a set of guidelines by which the deliberative system may systematically be utilised to guide the intuitive system, helping to optimise overall decision-making processes. Each guideline (Box 2) will be considered in the context of violence risk assessment.

BOX 2 Guiding the intuitive system in decision-making

Consider the learning structure

  1. ‘Kind’ structures: large, representative samples; rapid feedback; outcomes tightly linked to decisions

  2. ‘Wicked’ structures: small, potentially biased samples; delayed feedback; outcomes affected by multiple other variables

Use your own emotions as data

  1. Interpersonal dynamics

  2. Calibration of your own estimates

Impose ‘circuit breakers’

  1. Cost–benefit analyses

  2. Structured professional judgement tools

Tell stories

  Develop formulations with:

  1. plausible hypotheses

  2. individualised theories

(After Hogarth 2001)

Guideline 1: Consider the learning structure

Laboratory-based research suggests that the intuitive system can yield accurate judgements for a specific decisional task, provided that the prior sample of experiences (on which the relevant knowledge base is founded) is representative for that task (Betsch 2008). Hence, the context in which relevant information is learnt is critical to the quality of subsequent related intuitive judgements. Hogarth (2008) refers to these contexts as ‘learning structures’ and proposes that ‘the validity of intuitions depends on the learning structures prevailing when these were acquired’. He distinguishes between ‘kind’ learning structures, which provide feedback for learners’ errors and lead to subsequent accurate intuitions, and ‘wicked’ learning structures, which fail to correct learners’ errors and hence lead to subsequent inaccurate intuitions.

Kind learning contexts have the following characteristics: large sample sizes (lots of relevant experience of decisions and outcomes), reasonably immediate feedback, outcomes that are tightly linked to the judgemental decisions made, and experience of representative rather than biased samples. When feedback is relevant to the validity of the judgements made, accurate learning follows automatically; but where feedback is irrelevant, usually because it is affected by multiple confounding factors, intuitive learning is not to be trusted (Einhorn 1980).

Medium- to long-term risk assessment: a ‘wicked’ learning structure

With respect to the task of predicting violence, there is evidence that even very experienced clinicians tend to do poorly at predicting likelihood of violence in mental health patients in the medium to long term when using unstructured approaches to judgement (Cocozza 1976; Monahan 1981). This is unsurprising if we consider the relevant learning structure: clinical intuition about medium-term (weeks to months) risk of violence in the community is likely acquired in a ‘wicked’ learning context. Consider the clinician who makes a decision to discharge a patient based on an assessment of likelihood of violence over the following 3 months. Given the staffing models that prevail in most mental health services, the clinician may well not be privy to information regarding the subsequent violence of that specific patient. In addition, the role of variables such as the social context of the patient in the community means that the relationship between the discharge decision itself and violent behaviour is confounded by multiple factors beyond the clinician’s control. Furthermore, clinicians inevitably take action on the basis of their judgements (for example, they may not release patients who are considered to pose a very high risk), meaning that the learning environment is inevitably biased since clinicians are unable to learn from what might happen if such patients were released. The learning structure is ‘wicked’; an awareness of this fact will steer the clinician toward a greater emphasis on deliberative analytic approaches rather than a reliance on clinical intuition (even that of experienced staff) in making judgements.

Institutional violence risk assessment: a ‘kinder’ learning structure

The task of assessing the risk of institutional violence, particularly in the short term, involves rather different learning structures. Given how common violence is within hospitals (Bowers 2009), clinicians with reasonable experience of busy in-patient work will probably have accumulated a fairly large sample of violent outcomes from which their knowledge base can be constructed. Feedback on violent incidents typically arrives within hours or days of an assessment and is therefore likely to reach the clinician’s knowledge base. These features are indicative of a relatively ‘kind’ learning structure. Correspondingly, there is indeed some evidence that for assessment of violence in institutional settings, unstructured clinical judgements can work reasonably well (McNiel 1995; Fuller 1999; Hoptman 1999). However, they may usefully be augmented by simple tools that impose some level of structure on clinicians’ intuitive sense of patients’ likelihood of imminent violence (Ogloff 2006; Barry-Walsh 2009). Qualitative research also suggests that nursing response styles which are intuitive and ‘emergent’ and depend on understandings that have been ‘garnered from numerous clinical situations and varied patterns of escalations’ are most effective in assessing, and indeed managing, acute risk of violence in the in-patient context (Finfgeld-Connett 2009). Hence, there may be legitimate grounds for placing more emphasis on the role of the intuitive judgement of experienced staff members when determining the likelihood of imminent violence within an institutional context.

Guideline 2: Use your own emotions as a source of data

Feelings or ‘vibes’ generated by the intuitive system, Hogarth (2008) asserts, can usefully be treated simply as part of the informational matrix to be considered in any decisional context: ‘rather than ignoring or trusting one’s emotions blindly, I believe it is best to treat emotions as data [that are] just part of the data that should be considered’.

In the context of violence risk assessment, it is useful to consider two distinct kinds of such data to which the clinician will have access: emotions about patients that arise in the context of the therapeutic relationship; and emotions about the accuracy or otherwise of the clinician’s own risk assessment. These will be considered in turn.

Interpersonal dynamics in therapeutic relationships

A basic tenet of psychodynamic therapeutic practice is that clinicians can develop a deeper understanding of their patients by using their own emotions about the patient as a guide. Such emotions are often considered under the rubric of ‘countertransference’ and awareness of the transference/countertransference relationship ‘allows reflection and thoughtful response rather than unthinking reaction from the doctor’, as this journal has reported (Hughes 2000).

The academic psychologist Paul Meehl, who is generally best remembered for his strong advocacy of actuarial methods of decision-making, nonetheless recognised that determining the significance of the interpersonal dynamics evident in a clinical relationship is a task that is inevitably principally handled by the intuitive rather than the deliberative system (Meehl 1954):

‘In a therapeutic handling of the case, it is impossible for the clinician to get up in the middle of the interview, saying to the patient, “leave yourself in suspended animation for 48 hours. Before I respond to your last remark it is necessary for me to do some work on my calculating machine” […] at the moment of action in the clinical interview the appropriateness of the behaviour will depend in part upon things which are learnable only by a multiplicity of concrete experiences and not by formal didactic exposition’ [my italics] (pp. 81–82).

Thus, he recognised that informed clinical experience is essential to a sophisticated understanding and management of interpersonal dynamics in the therapeutic context.

Interpersonal style

Empirical consideration of the role of clinicians’ emotions as data, however, requires a theoretical framework to provide a basis for the operational definition of such emotions. The concept of ‘interpersonal style’, based on Kiesler’s theory of interpersonal behaviour (Kiesler 1997), may be used to provide such a framework. Interpersonal style refers to how individuals characteristically relate to each other and how people perceive themselves in relation to others. Although it is considered to be a characteristic of an individual, people react to their interpersonal environment and in turn influence the relationships that others have with them. Such interactions can be considered and understood in relation to the core dimensions of ‘power’ or ‘status’ (which ranges from dominance to submission) and ‘affiliation’ (which ranges from hostility to friendliness) (Fig. 1) (Leary 1957; Kiesler 1997). Behaviour generally elicits a corresponding response on the affiliation dimension (i.e. friendliness elicits friendliness, hostility elicits hostility), but a reciprocal response on the power dimension (i.e. dominance elicits submission and vice versa).

FIG 1 The interpersonal circumplex (after Kiesler 1997).
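To make the complementarity principle concrete, the following minimal Python sketch encodes the two rules described above: responses tend to correspond on the affiliation axis and to reciprocate on the power axis. It is not part of the original article, and the class name, field names and numeric values are purely illustrative assumptions.

```python
# Minimal illustrative sketch (not from the article). Encodes the Kiesler-style
# complementarity rules described in the text: behaviour on the affiliation axis
# tends to pull a *corresponding* response, whereas behaviour on the power/status
# axis tends to pull a *reciprocal* response. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class InterpersonalStyle:
    power: float        # +1.0 = fully dominant, -1.0 = fully submissive
    affiliation: float  # +1.0 = friendly, -1.0 = hostile

def expected_pull(patient: InterpersonalStyle) -> InterpersonalStyle:
    """Response the patient's style tends to pull from the clinician."""
    return InterpersonalStyle(
        power=-patient.power,             # reciprocal on the power dimension
        affiliation=patient.affiliation,  # corresponding on the affiliation dimension
    )

if __name__ == "__main__":
    hostile_dominant = InterpersonalStyle(power=0.8, affiliation=-0.6)
    print(expected_pull(hostile_dominant))
    # InterpersonalStyle(power=-0.8, affiliation=-0.6): a submissive, hostile pull
```

Noticing that one is being pulled toward a submissive or hostile position can itself be treated as countertransference data, in the sense discussed in the sections that follow.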

An understanding of interpersonal style and of the corresponding countertransference feelings can help clinicians to avoid overly reactive decision-making, which is sometimes prompted by emotional responses to their patients. Such reactivity is generally recognised to lead to an inefficient decision-making strategy in interpersonal settings (Baumeister 2007).

The value of emotional responses

A careful consideration by clinicians of the informational value of their own emotional responses can improve clinical judgement and decision-making. In the context of violence risk assessments, understanding interpersonal style (by tapping into the clinician’s own emotional reactions in the interpersonal encounter) may help to indicate the likelihood of future interpersonal conflict. It may also give clues as to particular dynamic ‘risk signatures’ (as has been explored in this journal; Reiss 2009) or the ‘relevance’ (Douglas 2010b) of specific interpersonal interactions for a particular person. For example, a person with a narcissistic, ‘dominant’ interpersonal style may be dangerously sensitive to even minor slights, which they perceive as intensely humiliating (Nestor 2002).

Research tools

The Chart of Interpersonal Reactions in Closed Living Environments (CIRCLE; Blackburn 1998) and the Impact Message Inventory – Circumplex (IMI–C; Kiesler 2006) are empirical measures, comprising structured questionnaires filled in by clinicians, that can rate a patient’s interpersonal style based on the clinician’s emotional responses to the patient. Assessment of interpersonal style using these tools shows good interrater reliability (Blackburn 1998; Kiesler 2006) and research has suggested that interpersonal style can indeed add to the predictive validity of violence risk assessment, at least in institutional settings. A study that used the CIRCLE to evaluate forensic in-patients (predominantly with psychotic illnesses) found that dominant, hostile and coercive interpersonal styles were associated with a higher risk of violent behaviour over the following 3 months, whereas a compliant interpersonal style was protective against subsequent violence (Doyle 2006). Another forensic in-patient study using the CIRCLE (Daffern 2010a) found that a coercive interpersonal style was associated with increased risk of aggression and self-harm during the following 6 months. A study of patients admitted to acute psychiatric units found that a hostile, dominant interpersonal style measured using the IMI–C predicted violence during the subsequent in-patient stay more strongly than did either acute psychiatric symptoms or patients’ perception of being coerced (Daffern 2010b).

Utilising emotions

The systematic use of a clinician’s own emotions as a data source may represent something of a rapprochement between psychodynamic and more mainstream approaches to risk assessment. This does not, however, legitimise a return to a purely unstructured approach. Some psychodynamically oriented critics of systematic, empirically grounded approaches to risk assessment have wrongly seen the emphasis on structure as having a defensive quality that militates against genuine understanding of the human interactions involved (Murphy 2002; Doctor 2004). Others, however, have recognised that there is in fact no such conflict and that ‘with experience, clinicians begin to recognise countertransferential responses in certain types of clinical situations and, as the likelihood of acting on them reduces, these responses can be utilised as tools both in helping to make the diagnosis, but also to assist in exploring the meaning of the risk’ (Flewett 2010).

Meehl (1954) draws a crucial distinction between the relevant facts, which may include ‘immediate impressionistic clinical judgements’ (in which the intuitive subjective clinical impression is treated as a type of fact – the clinician being ‘a testing instrument of a sort’), and methods of combination; such methods, he asserts, can be ‘actuarial’ even for such emotionally based data. He argues that the value of such facts in predicting outcomes is an empirical question to be answered scientifically: ‘it is still an open question whether the fact that the patient acts hostile or dominant ought to be given the weight that the clinician gives it at arriving at his predictions’ [my italics]. Studies such as those described above are beginning, half a century after Meehl’s writing, to empirically address this question.

Emotions about accuracy of judgements

Clinicians also have an emotional sense of the likely accuracy of their own judgement – a level of confidence in their own estimates. ‘Calibration’ refers to the relationship between a clinician’s sense of confidence and subsequent accuracy of their assessment decisions: if clinicians are highly confident about a violence risk assessment and this turns out to be accurate, but less confident about predictions that turn out to be inaccurate, then they are well calibrated. Research in this area is limited. US studies have shown positive relationships between accuracy and confidence for both unstructured (McNiel 1998) and structured (Douglas 2003) professional judgement approaches. However, other studies, such as that by De Vogel & De Ruiter (2004) in the Netherlands, have found that treating clinicians may be overconfident and less accurate in risk assessments involving their own patients. A Canadian study using the Short-Term Assessment of Risk and Treatability (START; Webster 2004) – a structured professional judgement tool – failed to show significant correlations between confidence and accuracy with respect to violence risk assessment (Desmarais 2010). This will be an important area for further research in risk assessment.
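As a purely illustrative aside (not part of the original article), calibration can be thought of as the statistical association between per-case confidence and whether the judgement later proved accurate. The minimal Python sketch below uses invented numbers; the variable names and data are hypothetical.

```python
# Illustrative sketch only (not from the article): 'calibration' as the
# association between stated confidence in each risk judgement and whether
# that judgement later proved accurate. All data below are invented.

from statistics import correlation  # available in Python 3.10+

confidence = [0.9, 0.8, 0.6, 0.5, 0.95, 0.4]  # clinician's confidence per case
accurate = [1, 1, 0, 0, 1, 1]                 # 1 = judgement proved accurate

# A well-calibrated clinician shows a positive association: high confidence
# tends to accompany accurate judgements, low confidence inaccurate ones.
print(f"confidence-accuracy correlation: {correlation(confidence, accurate):.2f}")
```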

Guideline 3: Impose ‘circuit breakers’

Hogarth asserts that the potentially deleterious effect of heuristic biases can be minimised by explicitly and deliberately using ‘circuit breakers’. By this, he refers to explicit processes that stop automatic decisional processing in its tracks and encourage the decision maker to incorporate deliberative reasoning. One such circuit breaker is the well-established ‘cost–benefit analysis’: a process that systematically weighs the possible costs and benefits of each option to determine which is optimal.

When assessing and managing clinical risk, commonly there is an array of competing factors to be considered in this way (Miller 2000), such as:

  1. How will this look if the worst happens?

  2. What are the likely outcomes in the longer term as opposed to the short term?

  3. What is the scientific evidence base and how similar is this patient to patients in that evidence base?

  4. What resources are available?

  5. What are the legal and ethical issues involved?

It is immediately apparent that these competing considerations cover a range of different domains, including the clinical, legal, ethical and political. It has been proposed (Montague 2002; Peters 2006) that the human decision-making system, when considering pros and cons from such a panoply of different domains, uses affect as a kind of ‘common currency’. This allows us to compare the values of very different decision options: when comparing ‘apples with oranges’, each is first translated to a certain number of ‘affective units’ that may be of positive or negative valence. Peters (2006) describes this process as follows:

‘by translating more complex thoughts into simpler affective evaluations, decision makers can compare and integrate good and bad feelings rather than attempt to make sense out of a multitude of conflicting logical reasons. This function is thus an extension of the affect-as-information function into more complex decisions that require integration of information’.

Hence, the cost–benefit matrix (see Carroll 2009), a deceptively simple tool, represents an integration of two approaches: the deliberative (systematically ensuring consideration of all logically relevant data) and the intuitive (facilitating the translation of the informational matrix into an overall affective output, favouring one decision pathway over another).

As an example, Table 2 shows a simplified cost–benefit matrix for the clinical dilemma of whether to compulsorily admit to hospital a man with schizophrenia who has disclosed the emergence of violent command hallucinations. A structured, evidence-based approach is used to ensure that all relevant factors are considered. For the purposes of illustration, each consideration has been given an arbitrary score in ‘affective units’. In this example, the clinician’s overall ‘feel’ is in favour of admission to hospital.

TABLE 2 Simplified cost–benefit matrix for deciding whether to compulsorily admit a man with schizophrenia
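The arithmetic behind such a matrix is trivial, but a minimal sketch can make the mechanics explicit. The Python fragment below is purely illustrative and is not drawn from Table 2: the options, considerations and signed ‘affective unit’ scores are all hypothetical, standing in for the arbitrary weightings a clinician might assign.

```python
# Minimal illustrative sketch (not from the article) of the cost-benefit matrix
# idea: each consideration for each option is given an arbitrary signed score in
# hypothetical 'affective units', and the option with the higher total is the one
# the overall 'feel' favours. All option names, considerations and values are
# invented for illustration only.

considerations = {
    "admit compulsorily": {
        "reduces imminent risk of violence": +3,
        "allows treatment of command hallucinations": +2,
        "damages therapeutic alliance": -2,
        "restricts patient's liberty": -2,
    },
    "manage in the community": {
        "preserves therapeutic alliance": +2,
        "respects patient's autonomy": +2,
        "risk of violence if hallucinations escalate": -4,
    },
}

def total_affect(option: str) -> int:
    """Sum the signed 'affective unit' scores for one option."""
    return sum(considerations[option].values())

if __name__ == "__main__":
    for option in considerations:
        print(f"{option}: {total_affect(option):+d} affective units")
    best = max(considerations, key=total_affect)
    print(f"Overall 'feel' favours: {best}")
```

Running the fragment simply prints the totals and the option the summed ‘feel’ favours; in practice the value of the exercise lies in forcing each consideration to be made explicit rather than in the numbers themselves.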

Violence risk assessment tools

Another circuit breaker is the violence risk assessment tool. The superiority of such tools over unstructured, purely intuitive clinical approaches has now been comprehensively demonstrated (Douglas 2010b). There are, no doubt, various reasons for this, not least that they force the clinician to focus on items of predictive validity and ignore invalid factors. However, they are generally seen not as wholesale replacements for clinical judgement, but as decision support tools (Monahan 2001). Risk assessment tools can therefore be appropriately used in conjunction with cost–benefit analyses. The complexity of real-world risk management decisions means that simply allocating patients to a category of low, medium or high risk is generally only the start of the decisional task and rarely in itself will be determinative: the pros and cons of the various management options (such as coercive psychiatric care) still need to be systematically considered. This nuanced approach to risk assessment and management is now recognised in the more sophisticated critiques of the state of the field (Mossman 2006), but is unfortunately absent from some polemics (Ryan 2010).

Guideline 4: Tell stories

Another way in which the intuitive mode of thought may be harnessed is by the conscious and deliberate use of narrative to ‘make connections that would not be suggested by more logical modes of thought’ (Hogarth 2001). The development of coherent narratives to better make sense of patients’ predicaments, although hardly a new approach, is enjoying a renaissance in clinical psychiatry (Lewis 2011). This task is generally performed under the broad rubric of formulation: ‘distilling a clinical case into an explanatory summary with a high “signal to noise” ratio’ (Reilly 2011). There are multiple models for clinical formulation (Weerasekera 1993; Ward 2000; Summers 2003; Sim 2005) but an element common to all is to ‘highlight possible linkages or connections between different aspects of the case’ with the aim that ‘the focus upon these inter-relationships adds something new’ (Royal Australian and New Zealand College of Psychiatrists 2011).

High-quality formulations invariably involve speculation and the creative generation of hypotheses that draw on clinical intuition: which elements of the patient’s narrative ‘feel’ to be central elements in the story and which ‘feel’ more peripheral? Distinguishing ‘signal’ from ‘noise’ is often a challenge and should draw on the clinician’s intuitive sense of the case: a good formulation ‘feels right’ for that particular patient.

Create plausible hypotheses

Such intuitive thinking, however, needs to be complemented by rigorous and systematic use of deliberative reasoning in at least two ways. First, in drawing up the initial formulation, the hypotheses need to be at least plausible and grounded in evidence: for example, drawing links between developmental trauma or genetic background and current psychopathology is acceptable, whereas speculations linking the patient’s astrological star sign to their personality features are not. Thus, the task requires the application of empirical knowledge (derived from group-based data) to make sense of the predicament of the individual patient. Second, the formulation needs to be tested for explanatory and predictive power in clinical work with the patient and revised accordingly. This iterative process, drawing on both intuitive and deliberative processes, mirrors the essence of the ‘scientific method’ in general (Medawar 1984).

Meehl (1954) supported the role of clinical intuition in this sense when discussing the creation of psychotherapeutic interpretations. He viewed these as preliminary hypotheses to be tested, asserting:

‘it is dangerous to require that, in the process of hypotheses creation, i.e., in the context of discovery, a set of rules or principles […] is a necessary condition for rationality. What should be required is that a hypothesis, once formulated, should be related to the facts in an explicit, although perhaps very probabilistic way’ (p. 73).

Individualised theories

In a structured professional judgement (SPJ) risk assessment, it is at the formulation stage that the rather prosaically derived set of risk factors is put together into a meaningful story – a unique pattern with individualised meaning for that patient and for that particular risk assessment challenge. The individual’s life and pattern of violent behaviour – the relevant aspects of their story – may be considered in terms of plots and themes that are common in those with a propensity for violent behaviour. For example, themes of abandonment as a young child may be related to a later pattern of aggressive over-controlling behaviour in relationships. Within this narrative, some risk factors have greater relevance to individuals than others. Thus, ‘the SPJ process encourages decision makers to build “individual theories” of violence for each person they evaluate. It may facilitate the identification of “configural relations” between a set of risk factors and violence, one in which risk factors might not firmly interact with one other, but may transact with one another, and with violence’ (Douglas 2010b: p. 174). Such ‘individual theories’ will guide the individualised management plan that should follow on from any risk assessment.

Conclusions

Although sometimes portrayed as a purely analytical process, the optimal assessment and management of risk of violence in psychiatry also draws on more emotionally laden ‘intuitive’ modes of thought. Clinicians have much to gain from playing to the different strengths of the intuitive and deliberative systems while avoiding their respective pitfalls. Simple awareness of these pitfalls, although necessary, is not sufficient (Plous 1993). Hogarth’s guidelines for ‘educating intuition’, although not specifically developed with clinical tasks in mind, provide useful pointers for clinicians when employing their intuitive skills.

When assessing risk of violence, psychiatrists would do well to heed Hogarth’s advice that professional decision makers learn to ‘manage their thought processes actively’ (Hogarth 2001). This requires time and effort to understand their own thinking and feeling processes, as previously examined in this journal (Reiss 2009). Importantly, this is not a ‘one-off’ learning exercise, but should be at the heart of an ongoing reflective learning process. Clinicians should bear in mind the words of a psychodynamic psychiatrist working in the National Health Service:

‘Emotional literacy can be developed but it can also be lost, particularly when subject to the cumulative psychic assaults that are quotidian in mental health work. Emotional literacy involves continuing self reflection and this can be eroded by the demands of functioning in a system which calls more for action than for thought’ (Johnston 2010).

MCQs

Select the single best option for each question stem

  1. Epstein describes the intuitive mode of thinking as:

    a. rational

    b. slow

    c. resistant to change

    d. more highly differentiated, nuanced thinking

    e. abstract.

  2. ‘Kind’ learning contexts have the following characteristics:

    a. smaller sample sizes

    b. outcomes that are tightly linked to the judgemental decisions

    c. delayed feedback

    d. unrepresentative samples

    e. low calibration.

  3. A dominant interpersonal style on the part of a patient:

    a. is related to a lower risk of violent behaviour

    b. elicits corresponding feelings of dominance in the clinician

    c. can be measured using the Short-Term Assessment of Risk and Treatability (START)

    d. is a measure of affiliation

    e. may indicate a specific ‘risk signature’.

  4. Structured professional judgement approaches to violence risk assessment:

    a. attempt to negate the role of clinical judgement

    b. have been shown to have good reliability and validity

    c. make cost–benefit analyses superfluous

    d. always require the Historical-Clinical-Risk Management–20 (HCR–20) to be completed

    e. indicate whether coercive treatment is indicated.

  5. When carrying out a violence risk assessment, countertransference:

    a. is impossible to test empirically

    b. should be ignored, to assure maximum reliability and validity

    c. may be a useful source of data

    d. is reliably measured by the HCR–20

    e. should be quickly acted on by the clinician.

MCQ answers

1 c 2 b 3 e 4 b 5 c

Footnotes

a. It is also possible, of course, that interrater differences about the likelihood of violence reflect idiosyncratic differences in clinicians’ affective reactions to the same data.

References

Barry-Walsh, J, Daffern, M, Duncan, S et al (2009) The prediction of imminent aggression in patients with mental illness and/or intellectual disability using the Dynamic Appraisal of Situational Aggression instrument. Australasian Psychiatry 17: 493–6.
Baumeister, RF, DeWall, CN, Zhang, L (2007) Do emotions improve or hinder the decision making process? In Do Emotions Help or Hurt Decision Making? A Hedgefoxian Perspective (eds Vohs, KD, Baumeister, RF, Loewenstein, G): 11–31. Russell Sage Foundation.
Betsch, T (2008) The nature of intuition and its neglect in research on judgment and decision making. In Intuition in Judgment and Decision Making (eds Plessner, H, Betsch, C, Betsch, T): 3–22. Lawrence Erlbaum Associates.
Blackburn, R (1998) Criminality and the interpersonal circle in mentally disordered offenders. Criminal Justice and Behavior 25: 155–76.
Bowers, L, Allan, T, Simpson, A et al (2009) Identifying key factors associated with aggression on acute inpatient psychiatric wards. Issues in Mental Health Nursing 30: 260–71.
Carroll, A (2009) How to make good-enough risk decisions. Advances in Psychiatric Treatment 15: 192–8.
Cocozza, JJ, Steadman, HJ (1976) The failure of psychiatric predictions of dangerousness: clear and convincing evidence. Rutgers Law Review 29: 1084–101.
Daffern, M, Tonkin, M, Howells, K et al (2010a) The impact of interpersonal style and perceived coercion on aggression and self-harm in personality-disordered patients admitted to a secure psychiatric hospital. Journal of Forensic Psychiatry and Psychology 21: 426–45.
Daffern, M, Thomas, S, Ferguson, M et al (2010b) The impact of psychiatric symptoms, interpersonal style and coercion on aggression and self-harm during psychiatric hospitalisation. Psychiatry: Interpersonal and Biological Processes 73: 365–81.
Damasio, AR (1994) Descartes' Error: Emotion, Reason, and the Human Brain. Putnam.
De Vogel, V, De Ruiter, C (2004) Differences between clinicians and researchers in assessing risk of violence in forensic psychiatric patients. Journal of Forensic Psychiatry and Psychology 15: 145–64.
Desmarais, SL, Nicholls, T, Read, JD et al (2010) Confidence and accuracy in assessments of short-term risks presented by forensic psychiatric patients. Journal of Forensic Psychiatry and Psychology 21: 1–22.
Deutsch, R, Strack, F (2008) Variants of judgment and decision making: the perspective of the reflective–impulsive model. In Intuition in Judgment and Decision Making (eds Plessner, H, Betsch, C, Betsch, T): 39–53. Lawrence Erlbaum Associates.
Doctor, R (2004) Psychodynamic lessons in risk assessment and management. Advances in Psychiatric Treatment 10: 267–76.
Douglas, K, Webster, CD, Eaves, D et al (2001) HCR-20 Violence Risk Management Companion Guide. Mental Health Law and Policy Institute, Simon Fraser University.
Douglas, KS, Ogloff, JRP (2003) The impact of confidence on the accuracy of structured professional and actuarial violence risk judgments in a sample of forensic psychiatric patients. Law and Human Behavior 27: 573–87.
Douglas, KS, Reeves, KA (2010a) Handbook of Violence Risk Assessment. Taylor and Francis.
Douglas, KS, Reeves, KA (2010b) Historical-Clinical-Risk Management–20 (HCR–20) Violence Risk Assessment Scheme. In Handbook of Violence Risk Assessment (eds Otto, RK, Douglas, KS): 147–86. Taylor and Francis.
Doyle, M, Dolan, M (2006) Evaluating the validity of anger regulation problems, interpersonal style, and disturbed mental state for predicting inpatient violence. Behavioral Sciences and the Law 24: 783–98.
Einhorn, HJ (1980) Learning from experience and suboptimal rules in decision making. In Cognitive Processes in Choice and Decision Behavior (ed Wallsten, T): 1–20. Lawrence Erlbaum Associates.
Epstein, S (2008) Intuition from the perspective of cognitive–experiential self-theory. In Intuition in Judgment and Decision Making (eds Plessner, H, Betsch, C, Betsch, T): 23–37. Lawrence Erlbaum Associates.
Festinger, L (1957) A Theory of Cognitive Dissonance. Stanford University Press.
Finfgeld-Connett, D (2009) Model of therapeutic and non-therapeutic responses to patient aggression. Issues in Mental Health Nursing 30: 530–7.
Finucane, ML, Alhakami, A, Slovic, P et al (2000) The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making 13: 1–17.
Finucane, ML, Peters, E, Slovic, P (2003) Judgment and decision making: the dance of affect and reason. In Emerging Perspectives on Judgment and Decision Research (eds Schneider, SL, Shanteau, J): 327–64. Cambridge University Press.
Flewett, T (2010) Clinical Risk Management. Churchill Livingstone.
Fuller, J, Cowan, J (1999) Risk assessment in a multidisciplinary setting: clinical judgement revisited. Journal of Forensic Psychiatry 10: 276–89.
Grove, W, Meehl, P (1996) Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical–statistical controversy. Psychology, Public Policy, and Law 2: 293–323.
Hogarth, RM (2001) Educating Intuition. University of Chicago Press.
Hogarth, RM (2008) On the learning of intuition. In Intuition in Judgment and Decision Making (eds Plessner, H, Betsch, C, Betsch, T): 91–105. Lawrence Erlbaum Associates.
Hoptman, MJ, Yates, KF, Patalinjug, MB et al (1999) Clinical prediction of assaultive behavior among male psychiatric patients at a maximum-security forensic facility. Psychiatric Services 50: 1461–6.
Hughes, P, Kerr, I (2000) Transference and countertransference in communication between doctor and patient. Advances in Psychiatric Treatment 6: 57–64.
Janis, IL (1982) Groupthink: Psychological Studies of Policy Decisions and Fiascoes (2nd edn). Houghton Mifflin.
Johnston, J (2010) Old ways of working. Newsletter: Psychotherapy Faculty, Royal College of Psychiatrists (Spring): 6–9 (http://www.rcpsych.ac.uk/docs/psyfacNewsletterSpring2010.doc).
Kiesler, DJ, Schmidt, JA, Wagner, CC (1997) A circumplex inventory of impact messages: an operational bridge between emotional and interpersonal behavior. In Circumplex Models of Personality and Emotions (eds Plutchik, R, Conte, HR): 221–44. American Psychological Association.
Kiesler, DJ, Schmidt, JA (2006) The Impact Message Inventory–Circumplex (IMI–C) Manual. Mind Garden Inc.
Leary, T (1957) Interpersonal Diagnosis of Personality. Ronald Press.
Lewis, B (2011) Narrative Psychiatry: How Stories Can Shape Clinical Practice. Johns Hopkins University Press.
Liu, Y, Yanjie, S, Xu, G et al (2007) Two dissociable aspects of feelings of knowing: knowing that you know and knowing that you do not know. Quarterly Journal of Experimental Psychology 60: 672–80.
McNiel, D, Binder, R (1995) Correlates of accuracy in the assessment of psychiatric inpatients' risk of violence. American Journal of Psychiatry 152: 901–6.
McNiel, D, Sandberg, DA, Binder, R (1998) The relationship between confidence and accuracy in clinical assessment of psychiatric patients' potential for violence. Law and Human Behavior 22: 655–69.
Medawar, P (1984) Pluto's Republic: Incorporating The Art of the Soluble and Induction and Intuition in Scientific Thought. Oxford University Press.
Meehl, PE (1954) Clinical versus Statistical Prediction. University of Minnesota Press.
Miller, MC, Tabakin, R, Schimmel, J (2000) Managing risk when risk is greatest. Harvard Review of Psychiatry 8: 154–9.
Monahan, J (1981) The Clinical Prediction of Violent Behaviour. U.S. Government Printing Office.
Monahan, J (1984) The prediction of violent behavior: toward a second generation of theory and policy. American Journal of Psychiatry 141: 10–5.
Monahan, J, Steadman, HJ, Silver, E et al (2001) Rethinking Risk Assessment: The MacArthur Study of Mental Disorder and Violence. Oxford University Press.
Montague, RP, Berns, GS (2002) Neural economics and the biological substrates of valuation. Neuron 36: 265–84.
Mossman, D (2006) Critique of pure risk assessment or, Kant meets Tarasoff. University of Cincinnati Law Review 75: 523–609.
Murphy, D (2002) Risk assessment as collective clinical judgement. Criminal Behaviour and Mental Health 12: 169–78.
Nestor, PG (2002) Mental disorder and violence: personality dimensions and clinical features. American Journal of Psychiatry 159: 1973–8.
Ogloff, JRP, Daffern, M (2006) The dynamic appraisal of situational aggression: an instrument to assess risk for imminent aggression in psychiatric inpatients. Behavioral Sciences and the Law 24: 799–813.
Ogloff, JRP (2011) Forensic Mental Health Spring and Autumn Institutes. Workshop 1: Assessing Risk for Violence Using the HCR-20. Centre for Forensic Behavioural Science/Monash University.
Otto, RK (2000) Assessing and managing violence risk in outpatient settings. Journal of Clinical Psychology 56: 1239–62.
Peters, E, Vastfjall, D, Garling, T et al (2006) Affect and decision making: a ‘hot’ topic. Journal of Behavioral Decision Making 19: 79–85.
Plessner, H, Czenna, S (2008) The benefits of intuition. In Intuition in Judgment and Decision Making (eds Plessner, H, Betsch, C, Betsch, T): 251–65. Lawrence Erlbaum Associates.
Plous, S (1993) The Psychology of Judgment and Decision Making. McGraw-Hill.
Quinsey, VL, Harris, GT, Rice, ME et al (1998) Violent Offenders: Appraising and Managing Risk. American Psychological Association.
Reilly, J, Newton, R (2011) Formulation: a proposal for a more structured, longitudinal approach. Australasian Psychiatry 19: 301–5.
Reiss, D, Kirtchuk, G (2009) Interpersonal dynamics and multidisciplinary teamwork. Advances in Psychiatric Treatment 15: 462–9.
Royal Australian and New Zealand College of Psychiatrists (2011) Formulation Guidelines for Candidates: Trainee Clinical Examination. RANZCP.
Ruscio, J (2007) The clinician as subject: practitioners are prone to the same judgment errors as everyone else. In The Great Ideas of Clinical Science (eds Lilienfeld, SO, O'Donohue, WT): 29–48. Routledge.
Ryan, C, Nielssen, O, Paton, M et al (2010) Clinical decisions in psychiatry should not be based on risk assessment. Australasian Psychiatry 18: 398–403.
Sim, K, Gwee, KP, Bateman, A (2005) Case formulation in psychotherapy: revitalizing its usefulness as a clinical tool. Academic Psychiatry 29: 289–92.
Slovic, P, Monahan, J (2000) Violence risk assessment and risk communication: the effects of using actual cases, providing instruction, and employing probability versus frequency formats. Law and Human Behavior 24: 271–96.
Slovic, P, Finucane, ML, Peters, E et al (2004) Risk as analysis and risk as feelings: some thoughts about affect, reason, risk and rationality. Risk Analysis 24: 311–22.
Summers, RF (2003) The psychodynamic formulation updated. American Journal of Psychotherapy 57: 39–51.
Ward, T, Nathan, P, Drake, CR et al (2000) The role of formulation-based treatment for sexual offenders. Behaviour Change 17: 251–64.
Webster, CD, Douglas, KS, Eaves, D et al (1997) HCR–20: Assessing Risk for Violence (version 2). Simon Fraser University.
Webster, CD, Martin, M, Brink, J et al (2004) START: Short-Term Assessment of Risk and Treatability. British Columbia Forensic Psychiatric Services Commission.
Webster, CD, Hucker, SJ (2007) Violence Risk Assessment and Management. John Wiley & Sons.
Weerasekera, P (1993) Formulation: a multiperspective model. Canadian Journal of Psychiatry 38: 351–8.
Wittenbrink, B, Judd, CM, Park, B (1997) Evidence for racial prejudice at the implicit level and its relationship with questionnaire measures. Journal of Personality and Social Psychology 72: 262–74.
