From Myths to Realities to Progress: How to Improve the Quality of Analysts in the Private Intel/Political Risk Industries

Over the last five years, I’ve spent many weeks identifying the myths prevalent in the field of private intelligence and political risk (many of which the industry has actively perpetuated) and the realities of working in these fields. Although I haven’t covered all of the myths yet, I do think it’s time for a more optimistic post, one that focuses on how to improve the political risk industry. Not surprisingly, I’d like to start with the people: the analysts, consultants and project managers who make political risk firms function. We can only fix many of the industry’s issues by hiring, training and enabling the right people, and we’re failing miserably at this. First, we need to correctly identify the skills necessary to excel in the field. Only then can we find people with skills to match those needs. Obviously, there are a few basic skills: research skills, analytical skills, oral and written communication skills, familiarity with the Microsoft Office Suite, as well as the ability to collaborate, multi-task and work in a fast-paced environment, all while paying attention to detail. This probably sounds like countless job postings you’ve seen in just about any industry, and that’s exactly the problem.

Step 1: Write better job descriptions, and know what skills are truly needed.

Private intelligence and political risk are not just like any other industry, and they require a very particular set of skills. Ideally, analysts are polymaths, or at least philomaths. In addition to all of the skills mentioned above, intelligence analysts and political risk consultants need to possess curiosity, know how to ask the right questions (and have the persistence to find the right answers), have a vibrant imagination, and be able to absorb, comprehend, analyze and apply vast amounts of information very quickly. This includes being able to read maps, remember the names of leaders, understand statistics and be familiar with a wide spectrum of cultures, as well as having a nuanced understanding of business processes and how they are affected by political risks. Good analysts and consultants have to be able to think several steps ahead, to see the connections, causes and consequences of events.

Step 2: Establish more effective interview processes.

Right now, the hiring process in the fields of private intel and political risk basically consists of a couple of rounds of interviews where candidates are asked basic questions (What’s your greatest strength? Biggest weakness?), a writing test, and occasionally a case study analysis. I don’t really care if someone’s biggest weakness is that they have trouble saying “no” to people. I can teach them to prioritize tasks and feel empowered to tell managers when they’re overloaded with work. I can’t teach people to be naturally inquisitive or deeply creative. I want to see someone define political and geopolitical risk, and give a short presentation analyzing a recent international development. I want to watch them think on their feet as they tell me why it’s important, whom it’ll affect, and three scenarios for where things could go in the future. I want to hear their ideas for how a fictional client could navigate the situation to minimize disruptions to their business. I want to know what products they would offer the fictional client to keep them from running into the same problem, or to keep them better informed of their risk environment. Not only would such a trial by fire tell me whether the candidate possesses all the skills I value, but it would also weed out lots of candidates who seem good on paper but would likely be disappointing analysts and consultants. I don’t want to watch them flounder for three months when a 15-minute presentation and a conversation can minimize the torture and lost time for both parties.

Step 3: Stop hiring typical candidates and expand the pool of possible candidates by targeting smart people early. The types of people being hired by these firms can generally be grouped into 4 categories:

1)   People who’ve spent 3 months to 3 years at a government agency. Why? Their alphabet soup government agency experience lends credence to analytical teams. “We have analysts who’ve spent time in the FBI/CIA/DIA! We must know what we’re doing.” The problem? These people didn’t stay long enough to become truly immersed and succeed in their positions, and their early departure likely signifies that they weren’t good enough to get promoted. Or, they stayed long enough to get burned out by their frustrations with the bureaucracy, and now just want a position where they can coast and “re-charge.” Neither makes for a stellar analyst. They also lack business expertise.

2)   People who’ve previously worked at a think tank. Why? These people know how to research and write, but they’re used to spending months doing research and writing dense reports, not quick-turn two-pagers outlining the major threats to shipping firms in Asia. They usually have good networks, albeit ones full of other think tankers. There’s nothing wrong with these people at first blush, but they often lack any business experience, and therefore could explain the finer points of the Yemen conflict but couldn’t tell you why it affects a firm’s supply chain. I’d prefer someone who worked in a small logistics company, and likely got to understand the finer workings of business, over someone who’s only ever done research and never experienced how a business functions.

3)   People with a Master’s degree and no real-world experience. Why? A Master’s degree is quickly becoming a prerequisite for any decent-paying job in political risk. As someone who went from undergrad straight to Georgetown’s SFS, I heartily endorse getting that MA ASAP, but only if you’re sure of your path and have decent professional experience from your time in college. I worked in business, public relations, on campaigns, etc., throughout my time in undergrad, building an international network of contacts. Those who get a Master’s degree with virtually no experience beyond a couple of think tank internships are unlikely to have a vibrant network. When you’re being vetted by a private intel or political risk firm, the strength of your network becomes one of your biggest assets.

4)   People who are straight out of undergrad and speak a language. Why? They speak a language! I love eager students straight out of undergrad. They’re usually energetic, curious, and good with technology. Most often, they’re hired because they studied a necessary language. The problem is that taking four years of a language makes a person a good walking interactive dictionary, but not necessarily a good analyst. I can always find someone in my network to translate something for me. What I usually need is someone who also knows the culture, particularly the business culture, of the country where that language is spoken. This leads me to my next step.

Step 4: Do a better job of marketing the industry as a good career option.

Study abroad programs are increasingly popular, and students are going abroad to a range of fascinating countries where they have the potential to strengthen language skills, learn a new culture and see the world from a different perspective. The industry needs to start getting on the radar of sophomores, who are choosing majors and planning their junior year experiences, and letting them know that private intelligence and political risk are great career paths. By reaching them early, we can help students better focus their study abroad programs to suit their strengths, take advantage of all that the experience has to offer, including interning in a foreign firm, and equip them with the foresight to know which knowledge and insight firms will value in candidates. If we can convince these students to engage in international affairs in addition to international cocktail parties, we can shape a new generation of analysts who make the most of their time abroad and come back with knowledge, perspectives and a level of comfort being abroad that could greatly boost the value of our analytical teams.

Step 5: Institute interactive training programs.

Ah, orientation. The weeklong torture, I mean, "training process," usually consists of boring, endless lectures directed at the recently hired cohort. They throw in everything from security and sign-in procedures to how to access files, fill out templates and even follow style guides. We’ve all been to classes where everything makes sense when the teacher says it, but we get into trouble when doing it on our own. Why must we lock these poor schmucks, I mean analysts, into windowless rooms and drone on about dozens of things that they likely won’t absorb? Training should be an interactive process, not a passive listening experience. We should have new hires try to use the online platform, go to a mock client meeting or interview with a subject matter expert, design and deliver a mock presentation or draft a mock project. We need to give them feedback, point out oversights, let them try again and watch them improve. We need to give these people an active role in their training. Yes, this takes time and effort, but the results are always better, because they’re actually training to do something, running into obstacles and learning ways to address them while adopting company procedures.

Of course, instituting these steps will only yield good results if a company is well managed and provides analysts with adequate work to challenge them and allow them to grow. A bad firm won't find any magic fixes in implementing these steps. Now that we’ve covered how to make sure we have the right people to do good analysis, I’ll discuss the ways in which we can improve our methodologies in the next post. 

Myths Vs. Realities: The Client-Analyst Relationship... It Takes Two to Tango

This is the latest in a series of posts on myths vs. realities in the world of private intelligence and geopolitical risk. If you're new to the site, you may want to start with the Glossary. To see previous posts in this series, simply scroll down on this page. 

Based on some of your feedback to the last post, I want to address some of the myths surrounding the client-analyst relationship. It’s often taken for granted that a client has a direct line to an analyst, and can reach them at any time for an answer to a complex question. This is very rarely the case. The client-analyst relationship is often far less smooth, yet as I’ve mentioned previously, a client posing a question to an analyst is what sets the private intel cycle in motion. The nature of the client-analyst relationship can help predict just how smoothly the cycle will unfold.

Once an analyst receives the question, they employ a specific methodology to provide the answer. An answer can only be as good as the question, so it takes two to tango. Much of the time, bad questions lead to bad answers and tense relationships. Unfortunately, in most cases, clients aren’t leading the tango, and analysts have to try to step in and lead, often stepping on toes as they navigate the fine line between needing more information to respond to a client’s needs and not wanting to nag. There are several reasons why the client-analyst relationship is often dysfunctional:

Clients don’t know how to ask questions

In general, the more robust and detailed the question, the more complex the methodology, and the richer the answer. The more challenging the request, the more sources an analyst will use, the more experts they’ll reach out to, and the more detailed an answer they’ll produce. Conversely, the more general or simple the question, the weaker the methodology, and the more unsatisfying the answer. An analyst won’t work as hard to answer a basic question, especially when he or she already suspects the answer, and clients rarely ask how the answer was derived. There’s no question here that a client will receive an answer that’s only as good as the question he or she asked. Unfortunately, many clients don’t know how to ask the right question, or how to pose their request in the best way to ensure that they’ll receive a useful answer.

Which question is likely to lead to a useful answer: What’s happening in Yemen? Or… How will the ongoing political upheaval and complex security environment in Yemen affect port operations in Aden? An analyst can answer the first question with a paragraph, but that paragraph’s worth of information is very unlikely to help a client determine whether his operations are threatened or what steps might be prudent to avoid disruptions or mitigate threats to staff and cargo. The second question, however, is more likely to lead to a robust answer, including a general overview of the situation in Yemen, as well as specific information that will inform a client’s decisions regarding how to protect operations, build a relationship with a new governing authority, safeguard personnel and institute protocols designed to minimize losses. But to ask the right question, a client needs to know what information they need, which is driven by the purpose the intelligence will serve. That brings us to...

Clients don’t always want thorough analysis, just confirmation

In some cases, clients simply aren’t interested in getting thorough answers, because they’ve already made up their minds and simply want their decisions validated (I touched on this in the post on bias). These clients are typically the type who see geopolitical risk analysis as having little value, and therefore don’t integrate it into their organic decision-making process. To them, it’s an afterthought at best, and a compliance necessity at worst. In such cases, clients simply want to check a box that says they sought another opinion, largely for the purposes of mitigating liability. In other cases, a client doesn’t understand the many purposes of intelligence or geopolitical risk analysis, and asks basic questions because that’s all they believe such a firm can answer.

Similarly, if the analyst isn’t sure about the purpose her analysis will serve, she will err on the side of caution. If an analyst feels that a question is asked simply to obtain confirmation, there is less of an impetus to introduce information that calls a client’s decision into question, since an answer that upsets the client may lead them to end the relationship with the intel/risk firm. This is detrimental to both the client, who may be making a very big - and avoidable - mistake, and the analyst, who violates tenets of intellectual honesty to tell the client what they want to hear.

Sales/Marketing staff as a third wheel

Salespeople, who seek out clients and connect them with analysts, drive most new business. Few analysts go out and establish new business relationships. So analysts, who are best positioned to speak to their capabilities and skills, aren’t the ones speaking to clients about their needs. Instead, a salesperson may pitch a client exploring overseas expansion on a product that will help them decide which new market to enter. Unfortunately, salespeople are not well versed in intelligence, geopolitics or even business operations in a particular sector, and don’t know exactly what information they need to ask for to pass on to the analyst. Similarly, salespeople also don’t know whether an analyst is fluent in the language of the country to which the client is considering an expansion, or whether the analyst has a good local source network there to help gather the information needed to answer the client’s question. As the intermediary, who often turns into an awkward third wheel, the salesperson typically turns the communications between client and analyst into a game of telephone. Much of the meaning gets lost, and the intermediary gets in the way of a productive relationship, as the salesperson seeks to wrap up billable projects rather than provide maximum value to the client.

Analysts often need to seek clarification, and if they fear bothering a salesperson to do so, or don’t trust a salesperson to relay the right information, they’re more likely to complete a project without the necessary information, resulting in a product that’s all but useless. A bad experience can lead a client to stop using the firm. On the other hand, a knowledgeable salesperson (who knows how to introduce the client to the analyst and then let them develop a relationship) can do wonders, leading to a productive relationship between analyst and client. Once they’ve established a rapport with a client, an analyst can better anticipate needs, save their clients money and become an invaluable part of the decision-making process.

If clients are serious about deriving value from intelligence and geopolitical risk analysis, then they should understand that they are buying access to methodology and a relationship with an analyst, not an answer to a question in the form of a PDF. By being an equal partner in the tango, they can ensure that they know the right questions to ask, understand the analyst's strengths/weaknesses, and have a good idea of the methodology used to obtain information.

Now that we’ve covered the major myths, check back next Tuesday for the first in a series of posts with ideas for fixing some of these problems, improving the industry and boosting its value to clients. 

Myths vs. Realities: Bias and Groupthink in Private Intel/Geopolitical Risk

This is the third part in the Myths vs. Realities series. If you’re new to the site, you may want to start with the Glossary. To catch up on previous posts in this series, scroll down; they’re all on this page.

In the previous two parts, I examined some similarities and differences between the US Intelligence Community and the way that private intelligence firms and geopolitical risk firms function. The issues of bias and groupthink are prevalent in both; I don’t claim that the IC is any better at combating these problems effectively. However, despite much wider flexibility in hiring practices, the problems of bias and groupthink are even more acute in private intel firms. The following are five types of bias/groupthink that I’ve observed during my five years in this field:

Selection-driven Groupthink

For the IC, the security clearance process can substantially limit the pool of applicants it can consider, necessarily eliminating citizens of other nations, US citizens with extensive time overseas, naturalized US citizens with certain backgrounds, and so on. This is a security issue, and a reality that the IC cannot change. In the private sector, however, there is significantly greater flexibility in terms of the types of people who can be hired. Some of the very backgrounds that may prevent an individual from obtaining a security clearance in the IC could in fact provide invaluable expertise at a private firm, including close familiarity with other cultures, native language expertise, and wide-ranging business connections or personal contacts. And yet, most private intel firms are full of analysts from the same few top-tier universities, with the same think tank internships, generic study abroad experiences, and basket of intermediate language skills. These people had many of the same professors, read the same books, and wrote the same types of theses. They all got together at the university bar to bemoan becoming jaded cynics who are never surprised by anything. Now they read the same websites and books, and fantasize about the same PhD programs, while secretly hoping they’ll one day be Secretary of State. I don’t mean to suggest that there’s anything wrong with top-tier university graduates who become analysts, but when a company tends to hire from the same three schools, it shouldn’t be surprised that, over time, groupthink becomes a major issue.

It’s no wonder, then, that these very similar people think the same way, arrive at the same conclusions, and confirm each other’s analytical assessments (on the rare occasions when they bother to discuss them). Diversity among analysts is always good, but it’s not a magical solution either. Even a diverse group could produce uninspired analysis if projects are written by a single analyst who never engages with others to debate hypotheses, explore possibilities or consider alternative perspectives. And in fact, most projects are the result of an analyst working solo, thereby exacerbating this groupthink issue, and leading to the next two types of bias, which are inextricably linked. One leads to the other, and both stem from the lack of a division of labor between collection and analysis.

Collection Bias

As I mentioned previously, the IC divides collectors and analysts into two separate groups, and the skills needed to excel in each job vary greatly. In most private intel firms, the same person typically collects and analyzes the information, interviews subject matter experts, fact-checks their own research, and writes the assessment. That’s a lot for one person, and it reminds me of the improbable character of Charleston Tucker on State of Affairs. She runs covert ops, shoots people, breaks people out of custody, and, oh, by the way, puts together the Presidential Daily Brief and actually briefs the President. When someone is a one-man private intel firm, they can’t possibly execute every single function well.

The quality of the analysis is then determined not just by the analyst’s analytical prowess, but also by the quality of their research skills. Usually, an analyst confirms a piece of information in two other sources at most, rarely checking to see whether the two sources are truly independent, and deems the information correct. For some things, two sources are enough; for others, two sources are inadequate, and could significantly skew the analytical conclusion. Given short deadlines, the analyst will typically look for just enough information to make their assessment, and does not do the extra work to confirm it, challenge their own preconceived notions, or consider whether a certain piece of information is driven by an agenda and therefore inaccurate or useless for making decisions.

Confirmation Bias

When the information is weak, the intelligence derived from it is weak. When we simply try to confirm our preconceived notions, that’s not good analysis. Many of us look at the world through a set of lenses that we developed during our college and graduate school years. Some of us are realists, others liberals or constructivists. Those paradigms help us interpret the world around us, because we prioritize certain connections, dynamics and conflicts over others. These lenses lead us to certain suspicions or theories about what explains international developments and how to interpret new pieces of information. Our specialties also affect our analyses. With regard to the recent drop in oil prices, some saw the Saudis using OPEC as an offensive weapon to hurt Russia, Iran (and Syria), while others suspected that the Saudis were taking a defensive posture to protect an already eroded market share as US shale production increases. Both may be true. But if an analyst’s first thought is that Saudi Arabia is gaining power in the region, they may only search for information to confirm that suspicion, and ignore the market share question, which may in fact be the greater of the two motivations, or of greater interest to a client. If the analyst is also the collector, they’re casting a much narrower net for information. If the functions were split, a collector would cast a wide net and gather as much information as possible to pass on to the analyst, forcing the analyst to reconcile disparate facts and consider many competing explanations for what may be happening.

I should note here that clients are part of the problem with this particular type of bias. Many already have a preconceived notion that they’re trying to confirm, because they’ve already made plans to take a certain action and want to reassure themselves that they’re right to do so.

One of my biggest frustrations is that geopolitical risk analysis is not truly a part of clients’ decision-making process. It’s frequently an afterthought. If it were part of the organic decision-making process, geopolitical risk analysis would be used before a decision is made, not to rubber-stamp existing plans. To put this in simple terms: If you want to buy a laptop, you first read a variety of online reviews from respected tech sites, then go to the store to check out the top contenders, and then comparison shop online to find the best price, or buy the best laptop at the store. The way geopolitical risk analysis is used now is akin to first ordering the laptop you think is best online, then reading just enough online reviews, of only that particular laptop, to convince yourself that you made the right choice. Seems a bit backwards, right?

Caution-driven Bias

This is one type of bias that’s very difficult to counteract, because in this industry, there’s a legal consideration at play. Private intel firms never want to put a client in harm’s way or expose them to risky situations. Hence, analysis will always err on the side of caution. This frequently makes identifying and mitigating risks a little easier. But then again, it doesn’t take much expertise to warn someone not to go wandering around at night. It also certainly gets in the way of identifying and exploiting opportunities, all of which involve some degree of risk. It’s easy and logical to tell someone that they should stay in their well-guarded, five-star hotel. But if they need to go out to a building site, or visit a well-trafficked area, it takes a greater degree of security expertise to determine what is and isn’t safe, and what specific precautions may be necessary. Analyst X, fresh out of undergrad, likely doesn’t have that expertise, and so they’ll just give a generic set of common sense recommendations. Is that worth the money being charged for them? And if the people writing the recommendations aren’t trained to make good ones, why should those recommendations be trusted?

Template-driven Bias

Finally, we come to template-driven bias. Most products created by private intel and geopolitical risk firms are based on templates that follow a standardized outline. This creates consistency across products and ensures that analysts include a comprehensive overview of the subject they’re analyzing. Mostly, though, templates allow products to scale: swap out a couple of details, and the product can be resold. The motivation here is profit, not intellectual rigor. In theory, this is great. Every product in a series will look the same and provide the same information, allowing for comparison and better decision-making. In reality, templates beat original thought out of analysts, forcing them all to adhere to the same dry, BLUF-driven short paragraphs, where only the names are different. If an analyst’s eyes glaze over while filling one out, you can only imagine what a thrill it must be to read a dozen of them. But the real problem with templates is that they limit analyst initiative. They don’t inspire further inquiry, and if something isn’t asked for on the template, even if it may be relevant, an analyst is unlikely to take the time to hunt it down. Fill-in-the-blank-style analysis doesn’t benefit anyone, least of all the analyst. Standardized processes and products can be beneficial; standardized thinking never can be.

The bias and groupthink prevalent in private intel and geopolitical risk firms present a big problem that is not likely to be solved in the near term if these companies proceed apace. It’s not a theoretical problem. It’s a real problem that’s having real consequences for the direction the industry is taking. Check back next Tuesday for a discussion of methodologies, and my ideas for how they can be improved to boost the value that private intel/geopolitical risk firms can provide.

Myths vs. Realities: How Private Intel Firms Perpetuate Myths

This is the second in a series of posts on myths vs. realities in the world of private intelligence and geopolitical risk. If you're new to the site, you may want to start with the Glossary and the first post in this series, on the Private Intel Cycle (scroll down). 

In the previous post, I discussed some of the many myths regarding private intelligence firms' capabilities. It is my firm belief that many of these myths prevent the industry from being understood by clients, and keep good analysts from getting the resources and skills to do valuable, useful work. To a great extent, private intel firms actively perpetuate many of these myths, giving prospective clients the impression that they're capable of doing more than they actually do in producing the "deliverable," which is often just a PDF worth several thousand dollars. Given that many clients understand the intelligence business through the lens of James Bond, 24 and Homeland, they can be easy to persuade and awe with impressive-sounding spy terms. Firms do this through their structures, branding, and marketing materials. Let’s look at a few examples of the ways firms perpetuate these myths:

The structures/marketing materials: You've all seen the impressive, low-light photos of countless "Operations Centers," with semi-circular desks positioned under large screens. There are no windows, and everyone is pretty pale and serious-looking. Each person has two or three screens, and they're clacking away on their keyboards while wearing headsets. Some have remarked that these centers look like NASA mission control, promoting an image of 24’s CTU: secret people planning secret things in secret lairs. Sure, that makes it sound and look impressive, but "operations" is a terrible misnomer for the passive monitoring that goes on in these low-lit, often windowless bunkers. They're not planning covert actions there; they're checking secret “sources” using “proprietary methodology.” These "sources" aren't collectors in the field; they're just media like Twitter and CNN, requiring none of the tradecraft that the IC uses to develop and run a source network. A more accurate name for such a place would be "Monitoring Center," but it sure doesn't sound as awesome.

The branding: The private intelligence industry loves to cloak its work in jargon, much of it borrowed, appropriately or not, from the military and intelligence community. Everyone at a private intel firm is an “intelligence analyst,” even though many of them couldn’t tell you the parts of the intelligence cycle. It’s a handy answer to Washington D.C.'s most popular/infamous question, "So, what do you do?", and the interlocutor is nearly always impressed by the response, especially if delivered sotto voce in a dark corner of a bar. One of the most commonly borrowed terms is "sitreps," or situation reports, which tell the what, who, where, how and why of a development. These sitreps are typically nothing more than basic descriptions resulting from basic Internet research and passive monitoring, and rarely include the analysis of detailed local field reports that the name implies. Another commonly misused term is "red team,” and DC is seemingly full of them. The term originates in adversarial training, and describes a unit trained to think from the enemy's perspective and operate using their capabilities. In practice, the term "red team" as used in the private intel industry can apply to just about any basic analytic element, as if asking "hey, what do you think President X was thinking when s/he did this?” counts as analyzing a situation from another perspective. Surprise! It doesn't. Red teaming requires training, not just the ability to pretend you know what your enemy is thinking. What many of these teams do is more akin to writing fan fiction. But of course "red team" sounds more spy-tastic, even if they consistently fail to live up to the name.

Finally, the industry’s original jargon sin is the use of the term "strategy." Everyone is a "strategist," despite routinely betraying the fact that they don't know the difference between tactics and strategy. A bombing is not strategic: a bombing is a tactic used to advance a strategy (inciting terror), which is designed to achieve an objective (undermining the government, etc.). Here’s a tip: If a company doesn't have "strategy" or "strategic" in its name, it likely does better strategic consulting than those that do. It's like the Democratic People's Republic: neither democratic, nor a republic.

The point I’m trying to make here is that private intel firms are doing a disservice to themselves and their clients by perpetuating myths that skew the work they do. These myths keep clients from understanding the value private intelligence analysts could provide if given the training, skills and resources necessary to meet client needs. The private intelligence industry fills an important and unique need for clients, one that deserves to be recognized and understood. Its value is not in being a watered-down, merely IC-esque imitation with borrowed marketing, branding and jargon. Ultimately, private intel firms succeed by understanding client needs and filling them, not by masquerading as private spy firms and then being upset that clients don't understand or value their work. I'll share some of my ideas on how to accomplish this in a future post.

Check back next Tuesday for a post on bias and groupthink in the private intelligence industry.