REMOTE VIEWING

RV Articles & Editorials

www.firedocs.com/remoteviewing/RVEditorials.cfm


Target Definition and Session Intent

May 2005

Note: This is not a formal review of Courtney Brown's book REMOTE VIEWING: The Science and Theory of Nonphysical Perception... I will get around to writing one eventually. This article is a response to conversation about the theory that 'analysis drives the session', and in a larger context, to Brown's book (in part) as it addresses this topic, and to my views on the topic. I might add that just because my own views on this subject do not concur with Brown's does not mean that I don't like the guy personally. I don't know him personally. I've been involved with remote viewing since 1995, and he--like many others--has put in many years of effort. I recommend that everybody be open to new ideas and make the effort to discuss them ("hash them out," as the saying goes).



The Role of Analysis in Viewing Sessions and Apparent Results

There is a theory[1] that analysis is the true driver of session results.

In debate, this is easy to beat up on, due to its "omission" of other critical factors--such as the fact that many things bear on session results.

However, it's only when the theory is "overdone, and used to the exclusion of the larger context of psi" that it runs into most of its problems.

An idea which gives this (limited) support is:

If the viewer views what is most important about the target, the analyst's[2] interpretation is going to be one of the things--and likely one of a few major things--that is used to create the psychic composite the viewer brings through.

Notice, however, that the above still has the Viewer responsible for their data; it is simply that, on a psychic level, they are responsible for getting the data which matters most. The tasker intent, the feedback, the analytic results, and other factors "matter," and so may be taken into account (psychically) by the viewer.

But pondering the theory that "analysis is the true driver of session results" while leaving out of mention or consideration the obvious issues that

  1. the tasker and the end-user and the viewer's response to feedback (and feedback itself) are also very important, and
  2. the flux of "how" important any of these things are in combination is likely to vary

...makes it seem like "analysis creates the target definition."

What "really" determines a target, and how much analysis matters, touches on basics of viewer development that any viewer can experiment with:

  • The more you validate ANY source other than feedback AS feedback (e.g., getting the next target in the series, your friend's target, something that happens later that night, another target in the pool, etc.), the more the viewer will get data from those sources. Basic "learning theory" is enough to explain this! (Though subconscious avoidance of hard feedback might also play a part.) A toy sketch of this dynamic follows this list.
  • The more the viewer practices strong mental clarity, intent, expectation, etc., and the more they can intentionally reinforce their belief that there is only ONE target, the less often and to a lesser degree they are generally impacted by overlay, displacement, etc. This factor alone ought to make it evident that the viewer is the 'primary' factor in viewing. (Why this would ever have to be said is beyond me!)
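
Since the first bullet above leans on plain learning theory, here is a minimal sketch of the dynamic it describes. Everything in it--the source names, the weights, the reinforcement increment--is my own illustrative assumption, not a measured model of viewing:

```python
import random

def run(trials, validate_all):
    """Polya-urn style sketch: a 'source' the viewer validates as
    feedback gets reinforced, so later sessions draw from it more
    often. Source names and numbers are hypothetical."""
    weights = {"assigned_target": 1.0, "next_target": 1.0,
               "friends_target": 1.0, "later_tonight": 1.0}
    for _ in range(trials):
        # Draw a data source in proportion to its learned weight.
        r = random.uniform(0.0, sum(weights.values()))
        for name, w in weights.items():
            r -= w
            if r <= 0:
                break
        # Strict protocol reinforces ONLY the assigned target;
        # loose validation reinforces whatever happened to match.
        if validate_all or name == "assigned_target":
            weights[name] += 0.5
    total = sum(weights.values())
    return {k: round(v / total, 2) for k, v in weights.items()}

random.seed(1)
print("loose validation:", run(500, validate_all=True))
print("strict feedback :", run(500, validate_all=False))
```

Under strict feedback, the assigned target ends up dominating future draws; under loose validation, the off-target 'sources' keep growing too--the "whatever you validate grows stronger" point, in code form.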

That quoted conclusion--"analysis creates the target definition"--is highly misleading, since seen in context it reads more like:

"The viewer has many sources of psi, and every person/factor in the viewing's process is important to the end result, and may contribute to what the viewer subconsciously chooses to perceive (and even how well it goes)."

Notice I said contribute to, not determine. In my view, the viewer is the variable point of perception; the center of gravity or 'balance point' in a psychic sense.

*

There is an "extension" of the "analysis drives the viewing" theory. This extension is a whole new concept kind of glued onto the first, the idea that:

The analytic review of a session--comparing it with decoys or a target different than the Viewer was tasked for--will psychically influence the viewer away from describing their target and toward describing the other 'targets' reviewed instead.

This theory involves the idea that you cannot separate tasking and analysis; that analysis retroactively affects the tasking, and the tasking defines the target.

This theory was originally put forth by Calabrese, but the layman's online RV field eventually evolved the term "retro-tasking" to describe "the invasive/persuasive effect of an analyst upon the viewer's session" (or, as implied, upon the target definition).

Notice this removes the Viewer from being the primary determiner of the session data and results, and suggests that someone else psychically influencing the viewer (often to the detriment of session accuracy) is an unavoidable factor.

If this effect were avoidable, it would mean the viewer had the strength of intent to bring about what they intended instead--which would take this back into the realm of being a simple matter of viewer responsibility and resultant skill.

If that is the case, then most of these terms do not describe 'remote viewing problems' but rather 'issues novice viewers must learn to deal with'. As an analogy, problems like 'falling down' and 'missing the ball' and 'missing the basket' are not considered basketball problems; rather, they are considered player-skill issues--the more skill a player has, the less any of those is likely to be an issue.


[1. I should call this a hypothesis. It's not an actual 'theory', except in the layman sense of 'an idea'.]

[2. I should say, "the Evaluator's interpretation and judging," because real analysis is done double-blind.]


Displacement, Bleedthrough and Overlay

Displacement

In some situations, such as Associative RV and regular RV where rank-order scoring is done, a viewer is assigned a target, and a judge/evaluator has to compare their session not only against the target, but against other photographs, sometimes called decoys. If the viewer very clearly and specifically describes something in one of those options, and it turns out that option is not the target, this is referred to as "displacement." In other words, the viewer "displaced" their attention onto "the wrong option."

Applications (also called 'operational') Remote Viewing to some degree ends up fitting a similar "evaluation" model, because at some point a person is taking the session and comparing it to every possible way it might answer the question--every location or situation or person (or whatever) it might apply to--to find the best fit.

My own theory is that most things called displacement are labeled so incorrectly at the more novice level; any data not matching the target, when there are other options, is likely to match something else, and if the viewer sees those options as feedback, they're more likely to pick them up just from having that extra source of psi info. (I might add that official ARV protocol prohibits the viewers from seeing anything except the target feedback--the tasking is actually to describe their feedback experience. However, in the layman's field, some viewers do their own 'judging' for ARV, and so see and evaluate their session against all three possible target options.)
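
As an aside, the 'likely to match something else' point is easy to demonstrate with a toy simulation. This is purely my own illustration--arbitrary feature counts, nothing from any actual ARV protocol:

```python
import random

FEATURES = range(50)          # an arbitrary descriptor vocabulary

def random_photo():
    """A 'photo' here is just a random set of 10 descriptors."""
    return set(random.sample(FEATURES, 10))

random.seed(42)
decoy_wins = 0
TRIALS = 10_000
for _ in range(TRIALS):
    options = [random_photo() for _ in range(4)]   # options[0] = target
    session = set(random.sample(FEATURES, 10))     # pure noise, zero psi
    # Best-fit judging: crown whichever option overlaps the session most.
    best = max(range(4), key=lambda i: len(session & options[i]))
    if best != 0:
        decoy_wins += 1

print(f"noise sessions judged onto a decoy: {decoy_wins / TRIALS:.0%}")
# Roughly three out of four noise sessions land on a decoy and LOOK
# like 'displacement', even though the data matched nothing at all.
```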
Another idea I've been spouting for years--I mention it just by the way--is that obvious displacement may be subconsciously intentional. With a group of highly diverse options, any 'missing' of the real target may potentially describe one of the others, but the data might also, by sheer accident or luck, still be interpreted as best fitting the real target. The only way to ensure this cannot happen--that the result is negative--is to describe one of the decoys really WELL, so that it is sure to be chosen instead of the target. As to "why" the psychology might choose to do this, that's another topic. From the occasional psychological rebellion you see in all kinds of psi work, to the viewer subconsciously not wanting and/or not really believing in the outcome success would bring (e.g., winning the lottery, and the weird tendency of ARV groups working on this to nail the numbers on the ONE time the tasker forgot to buy the ticket...!), who knows? It's just an idea.

But there IS obvious displacement on occasion, particularly in ARV, even for 'good' viewers. (I say this hesitantly, as there is the reasonable argument that since viewing accurately is the measure of a viewer's skill, that the more often they 'displace', the less they are qualified to be considered a good viewer. In any case, viewers considered good by other measures do experience displacement on occasion. Note I said ON OCCASION. It is not chronic... it's just occasional.)

The theory above about analysis would suggest that displacement is caused because "the other targets" and/or "the mind of an analyst looking at the other targets" somehow reached in and distracted the viewer, who was helplessly unable to avoid being negatively affected.

As Associative RV is usually done on future outcomes (though it can be applied in different ways), another theory-set is that the future is still uncertain and that either

  1. The option chosen was 'what would have happened at the time of viewing' if something else had not occurred later; or that
  2. We actually exist in all probabilities, and/or exist in one 'probability line' but are constantly shifting, and the line the viewer was in at the time of viewing matched their choice.

(Note that option (2) is the same as option (1), except that (1) is a rather simple causal-chain-of-events theory, and (2) is more a Sethian or 'quantum' theory about personal reality.)

Bleedthrough

Some viewing situations are designed around viewing a target selected from a collection of 'potential' targets. For example, there may be a database of photographs, and whichever is chosen by the computer as 'the target' (before or after the session) is what the viewer is expected to describe.

Note that the viewer is supposed to be describing the target-location at the moment of, and in the focus of, the photograph. (They are not trying to describe the photograph, but the location/thing it represents.)

If the viewer describes--whether clearly and specifically, or only in part--other "options" from that collection, either instead-of or merged-into their current target data, that is often called "bleedthrough from the target pool."

When the 'bleedthrough' happens not based on a current pool, but based on the viewer describing "the next target in line" or "what I experience tonight" or "my best friend's target" instead, then it is usually called displacement instead.

There are other ways of running into the feedback problem, such as when a viewer works a target, gets feedback, and then decides to spend two hours reading the internet for more details about the target. By hugely expanding the feedback, they are retroactively modifying the target definition. Another way of thinking of this is archery: if you have a target, and you shoot at it, and you hit it, then you have done well. If, after the shot and feedback, you then re-define your target to be a whole area rather than just the one target within that area, that's normally called 'cheating'. :-) Of course, viewers get plenty of information not in feedback, and it's understandable that they want more feedback. But for the sake of learning theory and keeping the "data+feedback=learning" loop going, viewers should try to resist blowing protocol by doing this. Save those searches for the rare occasional session that deeply moves you, so much that you'd happily trade the feedback value for the experience of validation.

There are other forms of 'bleedthrough' or 'displacement'. For example, if your target is a location, and on the way to the location you witness a car accident: if you describe that car accident instead of the target--or mix it in with your target data--that is generally considered pollution from "around/near" the feedback experience. (In other words, the viewer had their mind set for 'feedback', and so that experience fit into their 'feedback' experience--and since psi may draw from future feedback, that is thought to potentially influence a session.)

This effect could be called bleedthrough, or displacement, or overlay.

I hope readers can see how these all get into the same dynamics, but in different situations. The terms are different but the fundamentals are the same. Another instance of the same dynamic is:

Overlay

Some viewing situations, after the session is complete, result in data that 'seems to be' (it may not be, but sometimes really obviously is) related to something in the intent or experience of the tasker, or the monitor, or the evaluator, or some other person who is directly involved in the overall remote viewing experiment. Overlay based on the Tasker's Intent (conscious or subconscious) is usually called Tasker Overlay, for example. At root, all overlay based on the source of info being a person is just a form of what Ingo Swann calls "Telepathic Overlay".

There are protocol problems that can make more of this show up in a session. But if the Remote Viewing is structured well without "secondary relationships and confounding factors" involved, this should not be an issue.

In the layman's field, a more insidious and common issue that brings this on is the natural tendency for viewers to want to (a) view in groups, and (b) take a role that is subservient to a leader, from the formal 'student and teacher' role to the simple 'tasker and viewer' role. From similar session data on the same/similar tasks done separately, to (in some cases) literally paranormal experiences that correlate with the group's idea-energy (the belief systems of the group, and particularly of the leader), this is a rather frustrating issue, because it creates a 'confounding' factor in any problem encountered or addressed by a 'viewer-group'--or by any 'expert' leading such a group and using it for his or her research.

One way some small groups try to avoid these overlay issues (and that of the guru-factor) is to make everyone in the group an equal, either by removing the guru factor with a distribution of management of the group (such as the TKR Remote Viewing Project), or by sharing the tasking equally between all participants (something which several online view-groups do). That doesn't necessarily remove the tendency for "rapport" between viewers in the group, but it does mitigate some of the more extreme issues.

Causation and Prevention

At the heart of many debates is the causation factor for these effects. Some people say, "Having more than one target in a pool will cause bleedthrough from the other targets!" And one has to admit that there is evidence that "it happens" with viewers.

However, many viewers--including the best in the world--suggest that whatever you focus on and validate will become stronger, and whatever you dismiss and train away from, will become less frequent or less evident. (Come to think of it, that does not sound like a logic that is difficult to believe, since it works pretty well for every other subject in the world as well!)

This would suggest that it isn't really the external situation (other possible targets, what the analyst or tasker was thinking), but that the viewer simply has to have a strong enough focus to define the target psychically as needed. This issue may in fact be a big part of defining "remote viewing skill".

In the end, remote viewing isn't just about describing the target; it's also about excluding everything that is not important to the target within the context of the session.


Viewer Development and Learning Theory

There are other issues that reflect on displacement and overlay.

A really clean protocol is a basic, of course. Avoiding secondary relationships and confounding factors--which is very difficult to do in view-groups without very careful protocol planning--is a given. Aside from that, the most critical factor of all is usually considered to be the mindset of the viewer.

If the viewer is psychologically validated by their guru going, "Oh, that is SO psi, but on the wrong target!", they're going to be harmed by that, just as in learning theory, where giving someone the wrong feedback harms the process. You give NO feedback--no validation, no interest, nothing--when something is wrong. Learning theory is often covered in freshman psychology, so I think most people are probably familiar with these concepts. They apply to everything humans do, including remote viewing. Providing "emotional" or "validation" feedback, even when technically the hard feedback doesn't match a session, is incredibly subversive to a viewer's progress.

When the analysis-drives-viewing theory is taken out of the larger viewing context, its logic at best suggests that viewers are simply viewing the target 'through' the analyst's interpretation (or, when lacking feedback, just viewing the analyst!), and at worst suggests that the viewer is irrelevant, and we might as well give a random encyclopedia draw to the analyst and let them read it like tea leaves and 'retrotask' it to fit their whim.

So, when the theory is that "analysis CAN--and in most cases likely DOES--have a significant effect on the viewing results, as do other factors such as tasker and viewer," one is on pretty safe ground. Yes, the analyst/analysis has an important role in the overall viewing process.

But when the theory is taken out of context and magnified into, "analysis is what really determines the definition of the target, and so analysis that compares the session to things that are not the target will likely mess up the session", then it's invoking a lot of other assumptions. Such "seeming" effects are much more easily explained by psychological dynamics in groups and by poor focus on the viewer's part, than they are by the analyst having more power over results than the viewer.

Since Courtney Brown's book and an interview and public discussion about this are the spawn of this article, let me give an example from that source.


The Viewer's Job

Back in like, 1997 or so, Brown was already dealing with the fact that he and his viewers were having significant issues with bleedthrough and displacement.

He initially went to lengths to get around it in some technical (non-viewer) fashion, by constructing incredibly long and elaborate tasking that a friend (who reminded me of this) and I called EPIC CUEING back then.

In other words, instead of putting the responsibility and control for focusing on THE TARGET in the hands of the viewer, he attempted to force it more from the outside, as part of the tasking. Now IF this had been more an issue of protocol--of making things "simpler and cleaner," of making processes "more separate" so as not to have influencing factors, etc.--this would be a good focus. It doesn't seem this was the result, though.

Here is a small excerpt of something published by Farsight way back when[4].

Try this variant on "describe the target" (so much for 'clean and simple'!):

EDUCATION THROUGH DEMONSTRATION PROJECT: DEMONSTRATION #7. This target is to be done solo. This target utilizes the new anti-influencing/anti-blocking procedures of The Farsight Institute. You may read the target cue, but do not worry about it. The instructions are for your subspace mind, not your conscious mind. Just do the session normally. You will need to pick your own target coordinates! Use a page of random numbers or our free and downloadable target management program to get your target coordinates. YOU NEED TO WRITE THE COORDINATE NUMBERS THAT YOU ARE USING ON THE TARGET CUE PAGE. BE SURE TO PRINT OUT THIS PAGE AND DO THAT!

Demonstration 7: TARGET COORDINATES: The viewer is to perceive and to describe the next target for The Farsight Institute's Education Through Demonstration Project (that is, demonstration #7) that will be de-encrypted and revealed to The Farsight Institute from the encrypted cue given to The Farsight Institute by Dr. Mark Spraker. The essential cue and the qualifier for the target are defined by the essential cue and the qualifier as written by Dr. Mark Spraker. It is essential that the viewer strictly adhere to all aspects of the target limiter. The limiter for this target is the following: THE VIEWER WILL PERCEIVE ONLY THE INTENDED TARGET THAT IS CURRENTLY ASSOCIATED WITH THE ASSIGNED TARGET COORDINATES. THE VIEWER WILL NOT DESCRIBE ANY BEING, OBJECT, OR INTANGIBLE THAT DOES NOT EXIST IN THIS TARGET. THE VIEWER WILL REMAIN FREE FROM ALL NON-TARGET INFLUENCES.


Brown, like his teacher Ed Dames, can make even the simplest things sound like a term paper. :-)

Not surprisingly, in the long run this did not work, though it might have shown promise at first (as most 'changes' to an RV process tend to).


[4. Thanks to Skye Turell for the reference.]


Rationalizing 'Misses'

Still suffering these issues, Brown has since built a huge edifice of theory to validate the idea that when a viewer gets data that is not target data, they are in essence still right. It's not that they are wrong; it is that the target is wrong.

So the viewer is never really wrong, you see!--rather, it's just that the definition of the target is different than the tasker and feedback indicate!--something else interfered, so how could the viewer help it?

Many novice viewers have the logic, "Yeah, but it's obviously still psychic!" Perhaps true. But a more skilled viewer, evaluating their own work or that of those they may mentor, usually has the response of, "Yeah... but so what? It isn't the target." Not Brown! My cynical side wonders if this is because (a) he as a viewer suffers it as well, and/or (b) his ego won't admit that students using his methods properly still aren't as good as he wants them to be.

Try telling a scientist, "Well the Target was A, and the Tasker said to describe A, and the feedback to the viewer was A, but it makes perfect sense that the viewer described C, because the later analyst compared the session data to A, B, C, and D!"

That displacement, overlay, and bleedthrough have been seen in the lab before and documented only indicates that psi science has been fairly good about recognizing the many effects which plague viewers; but that they were once documented as occasional issues does not mean they must be issues. There is a reason that psi research labs pay for world-class viewers to work with them, after all--and that's because those viewers have good enough skill to view 'through' these potential issues.



Finding What You Look For

Brown openly says in more than one place in the book that when he/his viewers had certain "experiences" with their viewing (where results were not as accurate as they would have liked them to be), he then got 'suspicious' about what was causing it -- apparently "viewer skill" is not a consideration -- and then "did research" (set out tasks, had viewing, analysis, scoring, etc. -- he loves charts and graphs) to see if this suspicion turned out to be true. And -- amazingly enough -- he was right!


The primary/central theory of the book thus far is that analysis is what drives RV results, and related to this, that how well a viewer does on a target depends on how many other possibilities exist to "distract and influence the viewer away from the target".

Funny enough, this logic in general is not new; it's just that this is by far the biggest public effort to date made to validate poor viewing.

Now I don't mean that seeing overlay or displacement means a viewer is bad, since everybody has suffered 'overlay'--from various sources--and displacement (in a zillion ways). I suspect this is like missing the basket in basketball: it happens even to the pros, but it happens A LOT LESS to them, and it has on the whole far less detriment to their results. Since resolving such problems is our goal, we might take a minute to consider what the actual expert viewers say about it. (Someday when I have more time I will dig out quotes and stick them in here, but I'm busy, so don't hold your breath.)

Ingo Swann and Joe McMoneagle--the people who have really walked the walk and demonstrated that they've worked hard to get a handle on the dynamics of how to be successful viewers--both suggest in their various writings that the best solution is to clean up the protocol problems that can contribute to these issues, to maintain clear focus and helpful belief systems on the viewer's part, and to give clear feedback to viewers (both because feedback is a source of psi data, and because it is 'corrective' in learning-theory terms).

Brown's hypothesis implies, by its efforts, that there is no major responsibility on the viewer's part to resolve these issues because the 'problem' is actually being caused by external factors.


It Isn't The Target -- It's All Those OTHER Targets!

I was about to define the 'real' point of the book when I actually found it in the book:

"The fundamental substance of this volume delves into the problem of defining what makes a target a target."

Target definition is one of the subjects that has been very, very poorly addressed in the remote viewing field, as has analysis, so it isn't surprising to see these two issues both getting tripped over as viewers attempt to re-create a wheel nobody has shared in detail.

I think that to be called legitimate science, Brown's work will need to be redone with vastly fewer science mistakes than his book examples show. Even to an "interested layman familiar with the science," such as myself, there are glaring problems. (I have publicly stated I'll let the scientists beat him up for those instead of me, though.) I will say, though, that when "research" (good or bad) is driven by a need to validate an outcome rather than by an objective desire to learn more--no matter what the answer is--the science is highly likely to be flawed, confounded, or biased in some way.

Brown's justification for why viewers (as he knows them) suffer overlay and displacement to excess is based on the idea that the definition of the target is polluted, and that for some mysterious reason the viewer is not expected to be psychic enough to determine what is important and relevant to their tasking, or to psychically resist the existence/influence of other people.

The viewers I consider mentors expect a lot more from a viewer, I guess.

The book includes a FORMALLY NAMED (after himself) section which seems small enough to excerpt, and it's highly relevant to this discussion. It reads:

BROWN'S RULE: The probability of being able to successfully remote view a target is inversely related to the size of this target's probability space relative to the combined probability space of all alternatives to this target under random choice conditions. For example, it is easier to remote view a target successfully that has a one in 500 chance of being chosen from a pool of 500 targets than it is to remote view a target that has a 0.5 probability of being chosen from a pool of two targets. This is due to the increased coherency of the interference caused by the probabilistic potential of the competing attractors that are associated with each of the possible targets when they are few in number. This coherency is produced by mental/observational activities which intentionally or unintentionally link the remote-viewing data collection process with the alternative targets. The most general conclusion to draw from this rule is that determinism results not from increasing the probability of a single outcome, but by eliminating the probabilistic coherency from the alternate attractors.
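
Before getting to the problems, a quick aside of my own on the arithmetic in his example--this is just standard binomial chance, not Brown's math. The statistical meaning of a 'hit' changes drastically with pool size (the trial counts below are hypothetical, for illustration):

```python
from math import comb

def p_at_least(k, n, p):
    """Binomial tail: chance of k or more hits in n trials,
    with per-trial hit probability p under pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Illustrative scenario: a judge matches 6 of 10 sessions correctly.
for pool in (2, 500):
    p = 1 / pool
    print(f"pool of {pool:3d}: P(>=6/10 by chance) = "
          f"{p_at_least(6, 10, p):.2e}")
```

Six hits in ten trials happens by pure chance about 38% of the time against a two-option pool, and is astronomically unlikely against a 500-target pool--so any comparison of 'success' across pool sizes has to account for how much easier success is to fake by chance in the small pool.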


The first problem in Brown's Rule above is one that most viewers are supposed to learn about and deal with as novices, so it seems sort of ridiculous that he'd still be stuck in it:

THERE ARE NO OTHER TARGETS.

There is only YOUR target. No other targets exist. Other pictures might exist; so? Trillions of pictures exist in the world. The more a viewer is able to hold the psychological model that there is only ONE target--not "lots of targets, one of which they have to describe"--the LESS bleedthrough they get. This can be experimented with by ANY viewer. It's the same dynamic as how the more a viewer believes they can view what is 'important and relevant about the target', the less of the BS that might have 'floated through the tasker's or analyst's heads' shows up in their session.

Instead of suggesting that the viewers focus better, or that maybe, as science has suggested all along, only a small percentage of the population is truly cut out to view at the "world class" level, Brown validates whatever data they get that is inaccurate but matches anything else trackable (ensuring yet more of it), and projects responsibility onto everything and everybody except the viewer. All I can say is that if I were a viewer who'd been subjected to this, and realized just how harmful it is to viewing skill, I would be pretty upset about it.

The second problem in "Brown's Rule" above is that "reality" contradicts it--that is, the reality of viewers good enough to do professional lab work, vs. whatever Brown and his viewers are doing.


Reinventing Psi Science

It is a repetitive issue in Brown's book that he takes on legitimate science done with top-quality viewers, tries it in his little group of viewers--likely highly influenced by his expectations--and when they do not do well enough to support the findings of the paper he's referring to, he decides that the research is obviously wrong. His own alternative leaps to conclusions, however, are so far from science that even I, a layman, wouldn't dare make them!--such as:

...The analyst would unavoidably need to interpret the data, and this interpretation would by itself create a target, or at least create a biasing influence on the real target in the viewer's mind.

So Brown is talking as much about 'remote influence' as 'remote viewing'... and is assuming that the viewer is helplessly unable to determine their own focus. Another person merely looking at their session and comparing it to, say, one of five possible target-locations will mess it up!

This reasoning might work to validate or excuse why he and his viewers have, in his words, "serial" displacement and overlay/mixing problems in sessions, but I personally think his own problems as a trainer may be responsible for the "high degree" of this. After all, while these may be issues every viewer stumbles on, and then occasionally experiences, they are hardly "serial" in quantity or degree with anybody I know.

A huge portion of the book, taken by tone and content, is a big case for why one of the best and most legitimate physicists doing psi research in our world today (Dr. Edwin C. May, the physicist who led the majority of research in what's now called "The STAR GATE Program"), is simply wrong about nearly everything, despite having had many millions of dollars to work with, decade(s) for doing it, and some of the best viewers in the world to work with as his background.

Instead, Courtney Brown's tiny group of students has supposedly proven all that stuff to be wrong or bad ideas, and he's going to tell us how it really is! I realize that psi is historically associated with huge egos, but good grief!

As someone with a great respect for science in general and psi research in specific -- and for some excellent scientists who have certainly given up far better money and reputation to study psychic ability instead of more traditional fields -- it is difficult for me not to find this some kind of chutzpah. Brown's novice-level research mistakes will probably get him crucified in peer review, assuming the field's scientists decide to recognize his self-published book publicly. He not only does not seem qualified as even a beginner-scientist in psi research, but his controls and self-education on the subject are so poor that I have to wonder how he got the Ph.D. (in Political Science) he's already got.


Viewer Skill

The issue of viewer skill comes back when Brown says (referring to one of May's many analytic processes):

...We would not want to have a viewer attempt to perceive that level of precision with regard to colors. [...] Indeed if they spend much time on hue differentiation, they will risk having the conscious mind intervene with these subtle distinctions, and they will not have much time left to go after the more important things..."

Sheesh! Viewers train themselves to go after the important things. That's their job. That's what defines the skill of a viewer. In fact, a viewer's own 'psychic definition' of 'what is most important or relevant about a target within this viewing context' is normally considered much of the definition of the viewer's target. (The viewer's target may contain factors other than the feedback, tasking, or analysis--but that is a topic for viewer development, not for this article.)

What he is saying here is, "My viewers cannot deal with the analytic overlay issues that all viewers face." Well... OK! But what does this thing--specific to his viewers and their issues of skill--have to do with objective science, as measured by a formal science lab using obviously more developed viewers?! I mean, Brown clearly implies (I can't find the quote right now) that unless the world of science does it his way from now on, it isn't real science (because only he knows how to do it right, as the book explains in detail). That's a heck of a lot of arrogance for someone whose sole contribution to the field has been obliterating, in the public media, the respect-value of 40 years of science of which he had no part!

There are a variety of 'little issues' as well. For example, Brown pointedly states that science uses inexperienced viewers, as opposed to his much better idea of using viewers with better-developed skills. This is false. It was the case in the past; nowadays it is deliberately so only when the research relates to that issue (or is studying something about the distribution of results in the population). In the past, some trials were also run that way in response to critics insisting that if psi were valid, everybody should be able to demonstrate it. But the subject has been addressed plenty in the psi research field, and for a VERY long time the vast majority of legit science has used the most highly qualified psychics/viewers it could obtain. There is a REASON that Ingo Swann, Pat Price, Joseph McMoneagle, and others were employed full-time by science labs, after all!--they were the best viewers in the world.

This is only one of many (too many to count here) areas where Brown makes some big "assumption about how it is in science" that is not even accurate to begin with, and then expounds on why he knows better. If it seems exasperating to me, I can just imagine how the actual scientists in this field must feel.


The Original Hypothesis

Back in March of 2003, Prudence Calabrese -- who had once attended Courtney's Farsight Institute and had gone on to manage the institute for some time, leaving after the infamous "Hale Bopp" media fiasco -- departed the 'internet field' of remote viewing participants and posted an article suggesting that psi sessions could be used as a "distributed network". She suggested the hypothesis that a person totally separate from the viewer--and in fact, separate from any participation in the regular viewing process, and even after the whole process including analysis/feedback etc. were done -- could influence the viewer's session/results by re-analyzing the session against a different target. The implication of the article was that 'someone' (the inference drawn by most was some black-ops or para-gov't/para-military group) was actually doing this and purposely stalking her team and that this was related to why they were abruptly closing down.

This was considered a little paranoid by most, or an excuse for viewing issues that protocol problems and viewer-group dynamics are known to cause even when the viewers are good. In most of the field there was a resounding thud of silence in response. I made a point of posting the article and addressing it with other viewers on an email list I ran at the time (RV Oasis/pjrv). A viewer named "Eva" suggested online (back then) that if analysis against non-targets affected a session, perhaps this could relate to why ARV so often suffered clear 'displacement' (where the viewer clearly describes an option that is not the target). That was an interesting idea, and as a result, ARV and 'displacement' have ended up blended into online discussions about what is normally called "retro-tasking" (analyzing an existing session against a different target than it was done for, in the thought that this might influence the viewer/session psychically). There are slightly different--but related--topics given other names in the layman's field as well.

We (a collection of viewers) had a lot of discussion about it at the time and since. (I closed the list when my time ran out, but the archives are available if anybody wants to jump back to around March 3 and then go forward; visit http://www.groups.yahoo.com/group/pjrv/ .)

So many of the ideas in Brown's book were covered back then that it's hard to see where he might diverge into novelty. There is one thing he addresses that I don't think I've seen before: the idea that how probable it is that one target gets chosen rather than another matters to viewing results, because every other probability is a "distraction" for the viewer.

The problem is that his "viewer-group led by Official Expert" has--just by the nature of viewer-groups with gurus of a sort--built-in protocol problems. Psi research has certainly documented such issues as "sheep and goats" (viewers corroborating other viewers' data for reasons that seem to be based on psychology, not the target) and "experimenter effect" (when the beliefs or expectations of the person leading the trials seem to be clearly evident in the results--even when the protocol is as clean and tight as possible).


Other Views

Viewer Ingo Swann, talking about telepathic overlay for example, made it clear that the psychosocial relationship (and who was dominant in the relationship) could relate to this, as well as problematic protocol issues.

Viewer Joseph McMoneagle, talking back in Spring of 2003 about this specific subject when it came up (following Calabrese's article), had the following responses to my questions about some of these ideas. The fuller transcript is in the yahoo group link above (message #2847).

PJ: Can a tasker, we'll call her Kelly, assign a second tasking intent -- a second, different target -- to one of my existing sessions, and have any effect on that session whatever?

Joe: No. With a lousy remote viewer that might be possible, but with a remote viewer who knows what they are doing -- it shouldn't make any difference what Kelly does aside or in addition to what the original tasking and expectation might be.

PJ: Is the answer to that dependent on my own 'strength of intent' as a viewer?

Joe: Of course. How malleable you might be and how easy it might be to [mess] with your mind.

PJ: The 'why' might be, to get data on a target that the viewer does not know about.

Joe: You will always get the data on the target you want if you do the protocol properly and set up the intent and expectation correctly. Why do it any other way? Unless of course you are someone looking for a reason why it doesn't work right.

PJ: It seems a little similar to the effect of accidentally getting two targets, which I've done, such as trying to RV the target in the envelope and it turns out there's two. Mostly the result is just what I've seen myself in that particular case: the viewer has a difficult time making solid contact, seeming to go back and forth between the targets, and usually ends up with a "conglomerate" of the targets -- enough to seem like a real session on either target, but probably not as good a session as they might have had if their targeting/tasking/feedback had been singular.

Joe: I could respond with [...] "A good remote viewer would know when there are two targets and would differentiate between them." I've had to do that on more than one occasion when being targeted live on camera by bozos who do not follow the protocols and rules with regard to how they identify the specific target they want me to work. Having said that however, it actually is simply that -- something the remote viewer has to learn to deal with and recognize. [...] In real life, this stuff happens, and the viewer should be learning to cope with it, instead of copping out with "I got it wrong because the person who set up the target didn't do their job." I hope you hear what I'm saying here. The viewer needs to focus on "intent and expectation" with regard to the target -- not the target alone.

PJ: Might it be that 'intent' is really the secret law of the universe here, and that all the shamans and Castaneda-ish sorcerers were right all along--that it's really whoever has the strongest intent that will dominate any given energy transaction? Could this apply to RV as well?

Joe: Yes that might be true. But, whose intent carries the day? Really bad remote viewers are swayed by the intent of someone standing in the corner of the room. Really good remote viewers develop their own intent and methods that are designed to override all other forms of intent. Really bad taskers never understand intent at all. Really good taskers use really good viewers intent to work for them. Really bad judges never understand why intent is even necessary, and really good judges understand that they are being just as psychic and are just as blind as the remote viewer and intent is everything.


Layman's Research

Given the science problems within it, I consider Brown's work to be 'layman's research', but I put a high value on layman's research. It's been my observation that many important things have been discovered and implemented by laymen (particularly people working as engineers, botanists, and in other more 'experimental' fields).

The psi research field is so hard-pressed in the USA at the moment that part of me hugely appreciates ANY research; even one person bothering to document their own subjective experience is something I consider of genuine value.

It is very tempting, and very easy, for viewers to make the mistake of coming to conclusions after too little work--or, alternatively, of setting out to 'see if something is an issue', which usually amounts to seeing if they can MAKE it an issue, since they psychically and psychologically affect themselves, and, if they work with a team, the others as well.

Psi research is not really an easy field. I know it is common for people with PhDs in other fields to think they know everything by proxy, and such people often do research without any decent study or conversation with those genuinely experienced, and end up really blowing it on some basic control issue that was already learned 20 to 30 years ago. It's depressing for the field, frankly. This field needs all the help it can get.

I think Courtney Brown should be credited with putting a lot of work into this book. And I really dislike having to comment on things that make it sound like I am dissing his viewers, because I wouldn't. He and his team seem earnest and dedicated.

The book, though, is like an entire monument to the idea that the viewer is not the primary point of responsibility for session results. It has been my experience that sharing that outlook does actual harm to the development of viewer skill.

The book expounds on why all sorts of things which in fact DO work pretty well cannot, by his logic, work well. Whether it's viewing from a carefully crafted pool (in science), or viewing with rank judging applied, Brown's proposition seems to be that since these really don't work, he'll explain what does. But they really do work, already. And he wholly fails to realize that viewing for operations (where a similar evaluative comparison to session data happens) is in the same boat as the stuff he dismisses as essentially unworkable.

While some could argue that one or more of the methods used in science doesn't share the seeming results of other areas--that if they work, it is not to a high enough degree in science--I would say that's because science is vastly more discriminating and measures differently by far, and I consider that a good and useful thing. There have been studies with incredibly high effect sizes, but unfortunately they require tremendous expenditures of money to do right, and money is the one thing the psi research field just doesn't have.


Reputation and Credibility

When I see Brown's team able to view at the level of Joe and Gary and others currently working in world-class labs, and when I see Brown appearing to be sufficiently educated about half the things he's claiming to be an expert on, I might begin to take some of this more seriously--although I will probably never agree with what I perceive to be an almost chronic attack on the existing research in the field, research he clearly doesn't know enough about to be discussing, let alone insulting.

Since his entry into the RV field, Brown has displayed so many issues--faulty logic, omission and even evasion of proper protocol, obliviousness to interpersonal effects, overenthusiasm for (and I quote) "nearly omniscient!" results, and of course that patronizing arrogance--that it is just very difficult to see his current theories (and his 'research' designed to prove out those theories) as much different. Some of his 'results' can be seen, even on the surface, as likely stemming from a different variable than he thinks.

Regardless of books or science, the issue of what improves a viewer's results is of great import to every viewer. Any thinking on it, any research on it, is a good thing. In the end, it always comes down to viewers. I don't agree with his ideas on analysis forcing viewers' data--nor do I feel his research as shown in his book is legitimately enough done to qualify as evidence--but I support his efforts to learn something new and document it for others.

PJ

[end]

You can send email to PJ Gaenir about this editorial.




The Firedocs Remote Viewing Collection is now a static archive (Feb 2008).

All contents on this website are Copyright © 1995 to present by Palyne 'PJ' Gaenir. All rights reserved.
Permission is given to reproduce anything in small quantity, but online only, and please mention/link source.