Monday, July 16, 2018

Farengar Secret-Fire Has A Quest For You! Or What Video Games Can Teach Us About Virtual Intel Requirements

A couple of weeks ago, I wrote a post about the 3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker.  That post was designed for intel professionals who have the luxury of being able to sit down with the decisionmakers they support and have a conversation with them about what it is they want from their intelligence unit.  I also stated that doing this in a virtual environment or on an automated requirements management system like COLISEUM was both more difficult and something I would discuss in the future.

Well, today is your lucky day!  The future is here!


When I think about who does requirements best in a virtual environment, I think about video games.  Particularly, I think about massively multi-player online role-playing games (MMORPGs for short).  Very, very specifically, I think about the questing systems that are standard fare in virtually all of these types of games.  

Quests are the requirements statements of games.  I have included an example of a quest below (it is from a game called Skyrim, which is awesome and highly recommended).  These quests differ from intel requirements in that almost all of them are operationally focused (What do we need to do to accomplish our mission?) instead of intelligence focused (What do we need to know about the other guy to accomplish our mission?).  That said, there are still a number of things we can learn from a well-formulated quest that will make intel requirements in a virtual environment easier to craft and to understand.

[Quest screenshot from Skyrim, retrieved from Immersive Questing]
Video game designers know they have to get quests right the first time.  They don't have an opportunity to talk to the players outside the game, so all the necessary information needs to be in the quest itself.  On the other hand, they have to make the quest seem realistic.  Failing to maintain this balance runs the risk of creating an unplayable game.  As a result, video game designers have developed a number of conventions that allow quests to sound real while still being complete.  Intel already has the "real" part down, so what matters is making sure the requirement is complete.  In this respect, the final version of a good virtual intel requirement bears a remarkable resemblance to the final version of a good quest.  Here are the specifics:

  • They both provide background.  Why am I doing this?  What is the context for this quest?  In video games, putting the quest in context allows the story to unfold.  In intelligence work, this context allows the intelligence professional to better understand the decisionmaker's intent.  This, in turn, allows the intelligence professional to have a better understanding of the kinds of information and estimates that will prove most useful.
  • They both define terms.  In the quest above, I am to look for a Dragonstone.  What is a Dragonstone?  The quest defines that for me.  In intelligence work, agreeing on definitions of terms (particularly common terms) is incredibly helpful.  For example, you get a request to do a stability study on Ghana.  What term needs to be defined before you go ahead?  Stability.  We do this exercise every year in our intro classes.  There are multiple definitions of stability out there.  Which one is most appropriate for this decisionmaker is a critical question to ask and answer before proceeding.
  • They both use terms consistently.  If I encounter another quest asking for me to find a Dragonstone, I can count on it being the same thing I am looking for in this quest.  Likewise, in an intelligence requirement, if I define a term in a certain way in one place, I will use that term - not what I think is a synonym, no matter how reasonable it sounds to me - consistently throughout the requirement.  
  • They both often come in standard formats.  All video game players are familiar with a variety of standard quest formats such as the Fetch Quest (like the one above), where the task is to go, get something, and bring it back.  Intelligence requirements also come in more-or-less standard forms such as requests for descriptive or estimative intelligence.  Categorizing requests for intelligence and then studying them for similarities should allow an intelligence unit to develop a list of useful questions to ask based simply on the type of request it is.  (A sketch of what such a quest-style requirement might look like follows this list.)
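If you squint, a well-formed virtual requirement starts to look like a small data structure.  Here is a minimal sketch, in Python, of what a quest-style requirement record might contain.  The field names and sample content are my own illustration - they are not a standard schema and not taken from any existing requirements management system.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualRequirement:
    """A quest-style intel requirement: complete enough to stand on its own."""
    background: str                   # Why am I being asked this?  Decisionmaker's intent and context.
    question: str                     # The actual ask, phrased as a question.
    defined_terms: dict = field(default_factory=dict)  # Key terms and the agreed definitions.
    request_type: str = "estimative"  # Standard format, e.g. "descriptive" or "estimative".
    deadline: str = ""                # When the answer stops being useful.

# A toy example -- the content is invented purely for illustration.
ghana_study = VirtualRequirement(
    background="Regional expansion decision pending; leadership meets in the fall.",
    question="How stable is Ghana likely to be over the next 12 months?",
    defined_terms={
        "stability": "No unconstitutional change of government and no sustained, "
                     "large-scale civil unrest.",
    },
    request_type="estimative",
    deadline="1 October",
)
```

Like a good quest, everything the analyst needs - context, definitions, format - travels with the requirement itself.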

Requirements statements, whether managed in person or virtually, are almost always going to start out messy.  Without the advantage of a back-and-forth, personal conversation, the virtual requirements process has a greater potential, however, for breakdown.  Thinking of the requirement as a quest allows intelligence professionals to re-frame the process and focus on the essential elements of the requirement and, perhaps,  anticipate and address predictable points of potential failure in advance.

Look for the final part of this series later this summer when I talk about all the things you need to think about in the middle of requirements discussion with a decisionmaker!

Tuesday, July 10, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate

At the top end of the analytic art sits the estimate.  While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge."  Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.

[Outline: How to Write a Mindnumbingly Dogmatic (but Surprisingly Effective) Estimate]
Estimates are like pizza, though.  There are many different ways to make them, and many of those ways are good.  However, we try to teach our young analysts, just starting out in the Mercyhurst program, one good, solid, never-fail way to write an estimate.  You can sort of think of it as the pepperoni pizza of estimates.

Here's the formula (a toy sketch of how the pieces fit together follows the list):

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
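That formula is mechanical enough to sketch in code.  The function below is only a toy illustration of how the five pieces slot together into a single estimative statement - the parameter names and sample content are mine, not an official template.

```python
def assemble_estimate(wep, nuance, due_tos, despites, analytic_confidence):
    """Glue the five pieces of the 'formula' into one estimative statement."""
    sentence = f"It is {wep} that {nuance}"
    if due_tos:
        sentence += " due to " + "; ".join(due_tos)
    if despites:
        sentence += ", and despite " + "; ".join(despites)
    sentence += f".  Analytic confidence in this estimate is {analytic_confidence}."
    return sentence

# Toy example (content invented for illustration):
print(assemble_estimate(
    wep="highly likely",
    nuance="Country X will hold elections on schedule within the next six months",
    due_tos=["broad agreement among the major parties on the electoral calendar"],
    despites=["sporadic protests in the capital"],
    analytic_confidence="moderate",
))
```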
I'm going to spend the next couple of posts breaking this down.  Let's start with what makes a good Word of Estimative Probability - a WEP.  Note:  Linguistic experts call these Verbal Probability Expressions, and if you want to dive into the literature - and there's a lot of it - you should use that phrase to search for it.

WEPs should first be distinguished from words of certainty.  Words of certainty, such as "will" and "won't," typically don't belong in intelligence estimates.  These words presume that the analyst has seen the future and can speak with absolute conviction about it.  Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.


Notice I also said "good" WEPs, though.  A good WEP is one that effectively communicates a range of probabilities, and a bad WEP is one that doesn't.  Examples?  Sure!  Bad WEPs are easy to spot:  "Possibly," "could," and "might" are all bad WEPs.  They communicate ranges of probability so broad that they are useless in decisionmaking.  They usually serve only to add uncertainty rather than reduce it in the minds of decisionmakers.  You can test this yourself.  Construct an estimate using "possible," such as "It is possible that Turkey will invade Syria this year."  Then ask people to rank the likelihood of this statement on a scale of 1-100.  Ask enough people and you will get everything from 1 to 100.  This is a bad WEP.
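You can even run this test as a back-of-the-envelope calculation.  The responses below are numbers I invented purely for illustration, but they make the point: a bad WEP elicits rankings scattered across nearly the whole 1-100 scale, while a good WEP's rankings cluster.

```python
import statistics

# Hypothetical 1-100 likelihood rankings elicited from readers (invented for illustration).
responses = {
    "possible": [3, 12, 27, 40, 55, 68, 81, 97],   # bad WEP: answers all over the scale
    "remote":   [2, 4, 5, 7, 8, 10, 11, 14],       # good WEP: answers cluster at the low end
}

for wep, rankings in responses.items():
    spread = max(rankings) - min(rankings)
    print(f"{wep!r}: min={min(rankings)}, max={max(rankings)}, "
          f"spread={spread}, stdev={statistics.stdev(rankings):.1f}")
```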


Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities.  Take the WEP "remote" for example.  If I said "There is a remote chance that Turkey will invade Syria this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.



The Kesselman List
Can we kick this whole WEP thing up a notch?  Yes, we can.  It turns out that there are not only "good" WEPs but "best" WEPs.  That is, there are some good WEPs that communicate ranges of probabilities better than others.  Here at Mercyhurst, we use the Kesselman List (see above).  Alumna Rachel Kesselman wrote her thesis on this topic a million years ago (approx.).  She read all of the literature then available and came up with a list of words, based on that literature, that were best defined (i.e., had the tightest ranges of probabilities).  The US National Security Community has its own list, but we like Rachel's better.  I have written about this elsewhere, and you can even read Rachel's thesis and judge for yourself.  We think the Kesselman List has better evidence to support it.  That's why we use it.  We're just that way.
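One practical way to operationalize a list like this is as an explicit word-to-range lookup.  The sketch below shows the idea only - the specific words and numeric ranges are placeholders I made up, not the actual values from Rachel's thesis (read the thesis, or your organization's own standard, to get real ones).

```python
# Illustrative only: these words and ranges are placeholders, NOT the actual Kesselman List values.
WEP_RANGES = {
    "remote":         (0.00, 0.10),
    "unlikely":       (0.10, 0.30),
    "even chance":    (0.45, 0.55),
    "likely":         (0.60, 0.80),
    "highly likely":  (0.80, 0.95),
    "almost certain": (0.95, 1.00),
}

def probability_range(wep: str) -> tuple:
    """Return the (low, high) probability range a given WEP is defined to cover."""
    return WEP_RANGES[wep.lower()]

low, high = probability_range("highly likely")
print(f"'highly likely' communicates roughly a {low:.0%}-{high:.0%} chance")
```

The point is less the particular numbers than the discipline: every word on the list is explicitly tied to a range, so writer and reader share the same interpretation.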

Before I finish, let me say a word about numbers.  It is entirely reasonable and, in fact, may well be preferable to use numbers to communicate a range of probabilities rather than words.  In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities.  Why, then, do I consider it the current best practice to use words?  There are four reasons:

  • Tradition.  This is the way the US National Security Community does it.  While we don't ignore theory, the Mercyhurst program is an applied program.  It seems to make sense, then, to start here but to teach the alternatives as well.  That is what we do.  
  • Anchoring bias.  Numbers have a powerful place in our minds.  As soon as you start linking notoriously squishy intelligence estimates to numbers, you run the risk of triggering this bias.  Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean.  Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
  • Cost of increasing accuracy vs the benefit of increasing accuracy.  How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening?  Imagine that the issue under discussion is really important to you.  How long?  What if it were 79% vs 83%?  57% vs 83%?  35% vs 83%?  It probably depends on what "really important" means to you and how much time you have.  The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit.  This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors,  fundamentally irreducible.  
  • Buy-in.  Using words, even well-defined words, is what is known as a "coarse grading" system.  We are surrounded by these systems.  The traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test.  I have just begun to dig into the literature on coarse grading, but one of the more interesting things I have found is that it seems to encourage buy-in.  We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on.  This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.  (A toy sketch of this kind of bucketing follows this list.)
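Coarse grading is easy to picture in code: any probability that lands in the same band gets the same word.  This sketch reuses the placeholder WEP_RANGES table from the earlier sketch; with those made-up bands, 81% and 83% both come out "highly likely," which is exactly the point.

```python
def coarse_grade(probability: float) -> str:
    """Map a numeric probability onto a coarse word grade (bands are illustrative placeholders)."""
    for wep, (low, high) in WEP_RANGES.items():  # WEP_RANGES defined in the earlier sketch
        if low <= probability <= high:
            return wep
    return "undefined - falls between bands"

print(coarse_grade(0.81))  # 'highly likely'
print(coarse_grade(0.83))  # 'highly likely' -- the analysts can agree and move on
```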
I'll talk about the rest of the "formula" later in the summer!

Monday, July 2, 2018

3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker

One of the most important tasks of virtually all intelligence professionals is to sit down with their organization's decisionmakers and get meaningful intelligence requirements from them.  Requirements that are too vague or poorly designed make the intelligence professional's life more difficult.  More importantly, bad requirements often lead to analysis that fails to meet the decisionmaker's real needs and can, in turn, lead to the organization's failure.

All that makes perfect sense, right?  Getting a good answer to a question implies that the question is clear, that I understand the question and that I have the ability to answer it.  If I ask you, "What time is the movie?" then you are well within your rights to ask me, "Which movie?"  Good requirements emerge from a conversation; they aren't dictated through a megaphone.


Outline of this post (Trying something new here.  Let me know in the comments if you like it!)

Having this kind of requirements discussion is much more difficult in the context of intelligence, however, and not only because the questions are usually much more complicated.  There are a number of reasons for these challenges:
  • Chain of command.  Typically, intel officers work for the decisionmaker.  Even with the best of DMs, there is often a real reluctance to poke at the requirement, to make suggestions about how to make it better, or to question whether it is worth addressing at all.  While it is true that pushing the DM for clarity on his or her requirements statements is "just part of the job," it does not make the situation any less challenging.
  • Lack of understanding about intel.  Most decisionmakers rise up through operational channels.  This means that decisionmakers are usually much more comfortable with operational questions (i.e., What are we going to do with the resources under our control?) than with intelligence questions (i.e., What is happening that is critical to our success or failure but outside of our control?).  Even in the national security realm, where the intelligence function is typically much better understood than in law enforcement or corporations, there is often a lack of understanding or even a misunderstanding of how intelligence supports the organization's decisionmaking process.
  • Ops/Intel Conflation.  While there are good reasons to keep many operational discussions and intelligence discussions separate, that is not the way the decisionmaker is likely to think.  Responsible for integrating intelligence analysis with operational capabilities and constraints, decisionmakers are likely to conflate the two as they talk about requirements.  It is up to intelligence professionals to untangle them in such a way that they have a clear statement of their requirements. 
  • Lack of decisionmaker clarity.  Decisionmakers don't know what they don't know and good decisionmakers worry about that - a lot.  Even when decisionmakers fully understand intel, it is possible for them to have only a vague notion of what they want or need.  Particularly with strategic-level concerns, good DMs will be constantly asking themselves, "What questions should I be asking right now?" and worrying about wasting time and energy chasing an irrelevant question down a rabbit hole.
With this as background, there are three essential questions that intelligence professionals should ask and answer before they begin a discussion about requirements:

  1. What does the organization do?  At first glance this seems ridiculous.  How could you work for an organization and not know what it does?  You'd be surprised.  Even small organizations often appear to do one thing but actually spend much of their time or make most of their money doing something entirely different.  When I was younger, for example, I worked for a company called Hargrove's Office Supplies.  You would be excused for thinking that we made our money selling office supplies.  In fact, Hargrove's made most of its money in those days selling and servicing business machines - a very different kind of business.  This problem becomes much more acute in large organizations with many moving parts.  It is worth the intelligence professional's time to get to know the organization they are supporting in some detail - everything from strategic plans to tactical practices.  While the intelligence professional will never be as knowledgeable as the operators running the organization, the more intel professionals know about the goals and purposes of an organization, the more productive the requirements process will be.
  2. What is the current strategy and situation of the organization?  If the first question is what the organization does, then the second question should be "How does it do it?"  All organizations have a strategy (even if it is only an implicit one), and it is worth taking the time to consider what that strategy might be.  It is also worth thinking about the current situation in which the organization finds itself.  Is the organization winning or losing?  Successful and growing or failing and losing ground against its competitors?  While the situation of the organization should not matter in terms of the analysis - it is what it is - understanding how an organization is doing helps the intelligence professional understand where a requirement is coming from and gives insight into how to focus the answer.
  3. Who is the decisionmaker?  This is another simple question with a complicated answer.  It is tempting to believe that the person or organization asking the question is the one who wants the answer.  That is not always the case.  Oftentimes, the real decisionmaker is one or more levels removed from the person asking the question of the intelligence unit.  In this case, it makes sense for intelligence professionals to ask themselves what the real decisionmaker wants.  In the accelerating pace of the intel world, it is entirely possible that the requirement has gone through an elaborate version of the kids' game Telephone and now bears no relationship to what the real decisionmaker wants.  Even if it does, it is still worth thinking about the kind of answer that will meet the needs of not only the gatekeeper but also the decisionmaker behind the gate.  Finally, even if there is no gatekeeper, it is worth thinking about others who might not have asked the question but will be able to see the answer.  Almost nothing gets done in a vacuum.  Even the most siloed of programs often have multiple members with different intelligence needs.  It is important, therefore, to consider who these second- and third-level audiences might be before crafting the requirement in order to provide clarity and prevent confusion and mission creep.
All this advice is great for when intel professionals have the luxury of actually meeting with the decisionmakers they support.  How do you deal with a situation that is entirely virtual or managed through an automated requirements management system like COLISEUM?  Don't worry, we will get to all of that later in the summer!

Thursday, June 21, 2018

What Do You Want In A Cyber Self Defense Course?

Your company, agency, whatever, has hired an intern from the Mercyhurst intel program who has just completed their freshman year.  What do you want them to know about cyber?


That is one of the questions I will be wrestling with this summer.  I am teaching a new course in the fall called "Cyber Self Defense".  Nobody told me I had to teach this course.  Nope!  I volunteered (!) to teach this course.

You see, we have consistently noted that many of our first-year students come to us with a pretty poor understanding of cyber-related risks and how to minimize them.  The intent of this course is not to turn them all into white hat hackers.  All I really hope to do in the time I have is to make them into knowledgeable users.
It's like the old joke about the two guys and the bear.  The first guy says, "We will never outrun that bear!"  And the second guy says, "I don't have to outrun the bear.  I just have to outrun you!"  I want to create users who can, at least, outrun the other guy.
We wanted to teach this class at the freshman level because that is where we think it will be most useful.  It gives the students three more years to build on, or at least use, these skills, and an educated user base will only help make our own network more secure.  If this first class goes well, I think I would recommend that it become a requirement for all intel students.

As the obvious wonderfulness of this offering became increasingly apparent, the question naturally arose, "Who will teach this magical, extraordinary course?"  Those of you of a certain age will remember the old Life cereal commercial lovingly preserved by YouTube (above).  Suffice it to say, I get to play the role of "Mikey" in the 2018 remake...

So I throw it out to you, Gentle Readers:  What skills would you expect, what abilities would you want to see in that 18-year-old intern you just hired for the summer?  I am looking for tools, tips, tricks, websites, sources, absolutely-must-cover topics, don't-waste-your-time topics, and everything in between.  Free software and resources will be most appreciated, but making students pay for something that gives a big bang for the buck is also OK.

Here are a few details about the class to help you think through the problem.  It is an MWF class, and each class lasts 50 minutes for 15 weeks.  I have access to a computer lab, but I think I want the class to mostly be about their own devices - specifically cell phones and laptops (which virtually all students have).  We don't have a standard when it comes to these devices, so we will likely have a mix of Apple and Windows, Android and iOS (with Windows and Android machines likely being in the majority).

Here are my initial thoughts:
  • First couple of weeks:  Focus on cleaning up and maintaining their own devices.  My assumption is that at least some of these students will come in with malware or viruses on their systems already.  Almost all will come in with some sort of factory-installed bloatware, and I doubt any of their browser caches have ever been emptied.  The goal here would be to clean all of this up and to teach them how to maintain their devices.
  • Next couple of weeks.  Focus on likely attack profiles and how to deal with situations where some sort of hack is more likely (e.g. coffee shops and airports).  Things like phishing and social engineering would get covered here.
  • Mid course.  Focus on privacy.  Talk about how info on the web gets passed around and used.  Talk about how to protect yourself from oversharing and what to do if you do get hacked.
  • Next couple of weeks.  Focus on advanced topics (e.g., proxy servers, VPNs, Linux, etc.).  Should they build their own computer?
  • Final couple of weeks.  Talk about how to diagnose/help others with problems.  One of the most powerful tests of learning is seeing if the student can transfer their knowledge to new situations.  I want this kind of thing to be part of the final exam somehow.
I want this to be a project-based course that gives students lots of hands-on time with their own devices but also gives them enough conceptual knowledge to be able to integrate new stuff as it comes along.

I have a bunch of other half-formed thoughts, but I welcome your input and feedback first.  You can either drop it in the comments below (or in any of the social media where this will be posted) or you can just send me a note at kwheaton at mercyhurst dot edu.

Many thanks, hive mind!  Many thanks!

Monday, June 11, 2018

How To Talk Intel To Trump

This is not a political post.

I know, I know!  It seems almost impossible to make an apolitical statement about the current US president.  Hell, I am going to try - really try - and I am not even sure I can do it.  I have strong feelings about it and writing this post may very well do me in.

It is important to try, though, for two reasons:

  1. Intelligence professionals have long had to work for elected officials they did not like personally, professionally or politically.  It comes with the job.  Moreover, the bulk of the responsibility for figuring out how to make the relationship work falls on the intel professional, not the elected official.  That's not fair but it's true.
  2. I have something new to say about how to communicate intelligence to President Trump that might help.
OK.  Let's get to it.

For the last four years I have been running a project called Quickstarter.  Quickstarter connects students with skills with entrepreneurs without those skills in order to increase the odds of success using crowdfunding sites like Kickstarter.  I can talk all day about this project (and how - insert modest cough here - mindnumbingly successful it has been) but the key professional takeaways all have to do with intelligence support to entrepreneurs.

To build the program, I tapped into my own experience as an entrepreneur, best practices in crowdfunding and, importantly for this post, the growing body of literature in effectual reasoning.  Expert entrepreneurs, as it turns out, don't think causally (That's not a misspelling - I meant "causally").  They think "effectually."  

Dr. Saras Sarasvathy of the Darden School of Business at the University of Virginia did the first research on this idea and a number of other researchers have confirmed, in whole or in part, her results (the best introduction to effectual reasoning is probably her 2010 TEDx talk embedded below).  



If you don't have time to watch the video (and I do suggest you do), she sat down with a bunch of highly successful entrepreneurs and a bunch of corporate, MBA types and presented them with the same problem.  Then she watched (and coded) how they went about solving it.  It turns out the entrepreneurs attacked the problem entirely differently than the corporate guys.  She claimed that the entrepreneurs were practicing effectual reasoning.

What, then, is effectual reasoning?

Well, there is a whole website developed just to explain this (and all the research behind the concept) but it boils down to the difference between these two statements:
  • If I can predict the future, I can control the future. (Causal reasoning)
  • If I can control the future, I don't need to predict it. (Effectual reasoning)
(Note:  Some of you may think you see where this is heading and some of you may already be dismissing it.  I advise both groups to wait a bit before coming to a conclusion.)

Entrepreneurs (highly successful ones anyway) tend to focus on what they can control and how they can use that to move the ball in the general direction of where they want to go.  They don't much care for things like market forecasts or worrying about what their competitors are going to do.  

There is more to effectual reasoning than a worldview that values control more than prediction, of course.  It turns out that highly successful entrepreneurs have four additional principles that they tend to follow as they are thinking through problems:
(Note:  The definitions below are taken more or less intact from the Society for Effectual Action's website but have been lightly edited for length and relevance.)
  • Means (or the Bird-in-hand Principle).  When expert entrepreneurs seek to build a new venture, they start with their means:  Who I am—my traits, tastes, and abilities; what I know—my education, training, expertise, and experience; who I know—my social and professional networks.
  • Co-creation (or the Crazy Quilt Principle).  Since entrepreneurs tend to start the process without assuming the existence of a predetermined market for their idea, they don’t know who will challenge it and see little value in trying to figure that out. Instead, entrepreneurs generally take the idea to the nearest potential user. Some of the people they interact with make a commitment to the venture, committing time and/or money and/or resources and, thus, self-select into the new-venture creation process. 
  • Affordable Loss (or the Manage the Downside Principle).  Expert entrepreneurs think in terms of affordable loss rather than expected returns. Instead of calculating upfront how much capital they will need to launch their project and investing time, effort, and energy in building that capital, the effectual entrepreneur tries to estimate the downside and examines what he/she is willing to lose. The entrepreneur then uses the process of building the project to bring other stakeholders on board and leverage what they can afford to lose together. 
  • Leverage Contingencies (or the Lemonade Principle).  This principle is at the heart of entrepreneurial expertise—the ability to turn the unexpected into the profitable. Expert entrepreneurs learn not only to work with surprises but also to take advantage of them. In most contingency plans, surprises are bad—the worst-case scenarios - but because entrepreneurs do not tie their idea to any theorized or preconceived “market,” surprises can lead to valuable opportunities.
What does all this have to do with President Trump?  Look, we could debate whether or not Donald Trump is as successful as he says he is or as much of an entrepreneur as he claims to be but let's not.  Rather, let's assume, for the sake of argument, that he would fall into the category of "highly successful entrepreneur".   

Once you take that step, and you familiarize yourself with the principles of effectual reasoning, you have an alternative interpretation of his actions.  For example, when Trump reportedly asked "Why can't the US use nukes?" many people were horrified.  Seen through an entrepreneur's eyes it could be that he was just exploring the means at his disposal.  Likewise, Trump often floats ideas via Twitter without any staffing or planning.  It could be a sign of dysfunction or it could be that he is merely looking for enough co-creators to move the yardsticks knowing that he can control the narrative with another tweet tomorrow.  He certainly seems to have a disdain for in-depth preparation and forecasts and a preference for action.  Likewise, his approach to the North Korea summit seems to be all about managing the downside risk.  All of this is consistent with someone who is an effectual instead of a causal reasoner.  

Given that virtually all of the governmental enterprise is built around causality and deliberate planning and virtually all of the intelligence enterprise is built around forecasting, it is no wonder that there is a disconnect between the president and the intelligence community.  

Other explanations have been offered, of course.  Trump has been called everything from a sociopathic narcissist to a bumbling idiot to a tool of the Russians to a genius playing n-dimensional chess.  There is certainly evidence consistent with all of these hypotheses.  I am here to suggest one more - the effectual reasoner hypothesis.  I think that there is some good evidence to support this view but, more importantly, it gives real insight into how the intel community might be able to effectively pivot in order to better support this president and this administration.  On the off chance that he is "just" an entrepreneur, here are some things that occurred to me about how the intelligence community could improve its communications with the president:
  • Spend more time talking about opportunities.  We all give lip service to "opportunity analysis" but the truth is the intel community focuses on threats far more than opportunities.  Entrepreneurs want to control the narrative, not react to others.  Look for ways to frame the analysis as an opportunity for action, not as a response to a perceived threat.
  • Teach him the downside.  If Trump is an effectual reasoner, he is highly sensitive to the downside of any deal.  If you know there is a downside, make sure he knows it too.  If you just think there are some downside risks, expect him to ignore you, however.  The best you may be able to do is to define the field of play with bright red lines.  Don't expect him to give much credence to forecasts, no matter how well thought out and nuanced.
  • Re-think how you communicate estimates.  The IC has spent a good bit of time over the last decade thinking about and revising the estimative language it uses and what that language means.  While all this work has been good, it may be meaningless to Trump.  No matter how well we define phrases like "highly likely" and "virtually certain," it probably doesn't matter to an effectual reasoner.  There may be other formulations (e.g., Does "X will happen (moderate confidence)" = "X is highly likely to happen (high confidence)"?) that could satisfy both the president and the intelligence methodologists.  It would be worth exploring.
  • Talk to him the way he talks to others.  This may have been tried already, but I would think the IC's classified Twitter-like service, eChirp, would be a perfect way to communicate with this president.  The PDB would be more of an all-day thing rather than just a morning thing, but "chirping" headlines with links to video or graphics that gave deeper insight would certainly take advantage of Trump's well-known preference for short-form communications.  Combined with some of the other ideas on this list, it might offer an opportunity to get the president's feedback before it makes the news.