
Online RV-Land:
The Perils of RV Discussion

August 2003

An editorial essay about how, in a field of people who can be as highly organized as most RV methods require, a bit of 'common structure' to conversation--or at least some small consideration given to others when having it--isn't much to ask. -- PJ

Other folks like you can contribute editorials if you wish.
Or tell me how much you hate mine if that's more fun.
Use my Contact Form.
In other words, all of us are more the same than we are different. That we give overwhelming attention to our perceived differences gives rise to much of the human drama.
-- Ingo Swann
Remote Viewing and Its Conceptual Nomenclature Problems

I'm Talkin' About RV Sessions!

In session discussion, there are different aspects to consider. One cannot judge everything in RV by the same criteria, because RV overall has several different facets, all of which have value.


    The first issue is protocol. Anybody serious about RV, and at all trustworthy as a viewer and person, makes the protocol public and never exaggerates or blurs it.

    If a session had a monitor who knew the target, that must be stated. If the session had no monitor but the tasker was present during any of it, that must be stated. If the viewer suspected the nature of the target (or even KNEW the target going in), that must be stated.

    If there is no feedback beyond the tasking intent, that must be stated. If the feedback beyond a picture then included four web searches, an encyclopedia, and 2 hours of discussion with three other people who viewed the target, that should be stated. If feedback included a lengthy talk with the tasker, that should be mentioned.

    The tasking itself should be stated, as the detail of that even in small ways can have a substantial effect on the session (as anybody who has ever done much dowsing knows, one word can change a result). How much a tasker's mental intent overrides his word choice has yet to be nailed down, and might be different in the layman's world than science anyway (as many things are, based on a different viewing skill in participants).

    This is not so that we as viewers can beat up on anybody who is not working double-blind, and it is not so that we as viewers can beat up on anybody doing a session that does not have feedback. The fact is that some sessions are not double-blind and can't be, and some targets don't have feedback and can't have it (and many never will). As viewers, eventually we are going to have to deal with that, whether it be by interest (such as non-feedback targets) or by requirement (such as being close to non-blind about an emergency situation).

    This is not the ideal, or the optimum, and it should be known to every viewer that double-blind and feedback ARE the established protocol for formally calling something "RV". But that does not mean that remote viewers have no other options for an occasional personal session, or that anything outside that protocol should be demeaned or dismissed.

    Where "RV" the formal term is concerned, sessions out of protocol don't qualify. That does not mean they cannot obtain valid psi data and experience, offer 'food for thought' in a viewer-social context, or fuel speculative discussion.

    We should acknowledge these issues so people can quit arguing about them. I am a nut about proper protocol myself, but not to the exclusion of any validation of personal experience and potential psi outside it.


    I personally do not care about methodology, in the same way I really don't care about religion. I do not consider methodology critical to RV, in the same way I do not consider formal church affiliation or ritual critical to spirituality. So it means little to me whether someone uses CRV rigidly, CRV loosely, their own method with a couple minor aspects of CRV mixed in somewhere, their own method altogether, or whether they use TDS methods or HRVG methods or so-called ERV or XYZ-RV instead. I mean seriously, I just do not care. The endless alphabet-soup-groups in RV are a little wearying to me, as they so often seem to create more division than cohesion in the field at large.

    But some people do care, and because this field is (alas) steeped in the "methodology focus", viewers having discussions in public about a session should endeavor to include their methodology as a clear note just like protocol is. As some methodologies may be structured for certain reasons, a focus on methodology may invite people with more experience to comment on issues that could have improved the session. If a methodology was not meant to be strictly used, making that clear should prevent anybody from insulting the paperwork as technically imperfect, as structure was obviously not the focus in that session.


    One aspect of a remote viewing session is the degree of validation it offers about the viewer in question. Sometimes a session is presented as a way to 'validate' a remote viewer's skill. Other times--including any session out of protocol--a session merely exists as a curiosity, and makes no statement about the validation of a viewer. Whether this issue is part of a given session should be made clear.

    If it is not made clear, any out of protocol session will bring legions of critics, who will generally not address any aspect of the work at all except this issue. If it is stated clearly that a session is NOT at issue for viewer or data 'validation', then the issue of viewer and data validation should have no reason to be part of the conversation.

    Obviously, neither data nor viewer have any validation at all from that session if it was done out of proper protocol. So? That does not mean that the session has no other aspects worth discussion.


    The only validation of data is actual feedback which specifically matches session data. Not only is this non-existent for some of the more interesting targets, but feedback which is tangible, and specific, is sometimes difficult to come by even on relatively earthbound mundane targets. Even movie-sound feedback of a given moment will fail to include information that may be picked up in a session; even going physically to a target site will not tell you everything. And much tasking in the 'Layman's RV Field' as I call it, is... somewhat lacking in precise pinpointing of time and/or something that would allow specific feedback.

    Like the understanding of protocol, every viewer in a discussion should already understand from the context what data is 'validated' and what is not. It does the viewer and target a disservice to be over-critical (some data IS 'inferred', though not proven), to be over-skeptical (some data has not yet been evidenced, but science or the future may yet uncover it, so we admit it isn't validated but remain patient, as who knows?), or to be hyperbolic or too UNcritical (assuming data is correct when there is no feedback, assuming a viewer is terrific based on a few sessions or on anything done out of protocol, etc.).

    In discussions about sessions on targets which do not yet have feedback but someday likely will, it is only fair to allow a wide margin of "speculative room" about the data. One of the most astonishing, detailed, and later proven-factual remote viewings was Ingo Swann's work on the planet Jupiter. Most of the data was considered impossible and amusing at the time. Over the years, many if not most of the major points in his RV have come to be proven by later scientific exploration.

    It goes to show: you never know. If we don't have feedback, we cannot fairly validate anything, but we cannot fairly dispute it, either.


    The whole mentality of considering any session a pass/fail experience depending on the data is itself problematic. Nearly all viewers in the layman world--and frankly, all viewers anywhere--are endlessly in development; most viewers on the internet are certainly "still developing." We are all STILL LEARNING. The value of a session comes in many forms, and the worst sessions often teach the best lessons.

    That this is not more present in the minds of viewers is evident from how few sessions besides "the good ones" are shared from viewers' personal files. I once posted several sessions, selecting those which had the most valuable lessons in them for discussion. The funny (but not) result was someone later telling me, "Well I saw your stuff, and figured you weren't very good!" I wasn't sure if I felt like laughing or crying. Obviously, the point of my sharing those had been completely missed.

    The point of session discussion should not be to demonstrate prowess--that MUST be done in a double-blind, provable situation--talk means nothing. Walk the walk, in protocol, or it just doesn't count. (A little more walking and a little less talking in this field would go a long way, come to think of it.) The point should be to cover what has been interesting and educational to a viewer and so might be to others. Frankly, such lessons and observations often come from mistakes made in sessions. Making judgements on viewers in this way discourages people from sharing, and definitely from sharing anything that was poor-but-educational.

    When you practice basketball, every practice free-throw and lay-up has value. You may miss the basket. You may trip over your own feet. But it is practice, and you learned a little from that. Enough practice, and the value of your learning becomes exponential, and you start succeeding more often, and in more depth. When viewers are influenced by peers to consider a session's value based on the pass/fail mentality, they are done a great disservice. It is important that developing viewers consider every single session a fabulous opportunity for learning, and set out to discover what that might be.

    If a viewer is working a job where they get paid for accuracy or paid-by-the-accurate-data-point, then accuracy matters. Until such time as that is the condition someone presents, all sessions discussed publicly really ought to be seen in the larger light of the many aspects of RV.


    Completely separate from the issues of protocol, methodology, or the validation of anything is the issue of the session experience itself. Remote viewing ranges from intangibly annoying to spiritually astonishing, from brief and distant to extensive and experiential -- even fully dissociative. Sessions can result in trauma in viewers, as well as cognitive dissonance, and not just on the 'hard' targets. RV can be a 'high impact' experience, whether it becomes so in one session or gradually through the ongoing experience of sessions. Just the fundamental psychological changes that double-blind RV eventually brings about in a person are themselves affecting. Anybody not profoundly affected over time by the process of RV is probably not doing RV.

    One of the things remote viewers most need is the ability to discuss the psychological and experiential aspects of their session work, and its effects on their thoughts, belief systems, relationships, spirituality, and other areas of life-size importance. There is a time for debating protocol, method, or validations from a session, and there is a time when instead, the real issue should be the viewer's sharing of their experience.

    If a viewer cannot discuss a session experience without getting verbally beat up because there is not yet feedback on the target, then we have really limited the value our "community of viewers" can provide for all of us. When a viewer wants to talk about an experience, and another viewer wants to talk about a different aspect of things, those should be done in separate postings or threads. No viewer should feel ignored or rebuffed or even attacked over protocol issues like lack of feedback, when what they really need is to talk with someone they relate to about an experience which moved them.

    Anybody mature enough to do RV and talk intelligently about it ought to be able to see this aspect of RV as important to viewer development, and to have the courtesy or compassion to keep separate those facets of RV that are inappropriate to mix together in conversation.


    The most interesting aspect of RV is surely that it is able to obtain information which is currently secret, or even unknowable, by other means of query. Sometimes obvious feedback answers all the questions. But it is no surprise that anybody viewing for a while will begin to expand their own boundaries of experience and interest, moving into the occasional target that does not have feedback, and for that matter might not ever have it.

    One of RV's many great potentials is the ability to come up with data about currently-unknown things which may spark the consideration, or creativity, of people in the here and now. Whether it is future technologies or the details of other planets, the context of archeological anomalies or the ineffable quality of human experience, RV paves the way for every viewer to have a genuine exploration of the world--and themselves. Such experience and information can lead to creative (or just previously unconsidered) ideas about what is possible.

    We know from science that multiple viewers getting the same data does NOT validate the data. Sometimes the viewers' knowledge that this happened may in fact be what (retrocausally, it appears) affected their sessions to make it so, as they take that as a form of feedback/confirmation (one of many confusing catch-22's in RV). The influence of existing close associations between viewers (especially team environs), a shared tasker, or other minor elements in a protocol or situation can also probably be seen in session results, particularly for novices.

    (I am obliged to add here that a viewer believing this to be so is likely to increase its being so, and that proper belief-system development for a viewer would focus away from such potentials, and instead on the need for a viewer to clearly delineate what constitutes feedback and NOT to validate their session data against anything outside of that.)

    My point is, the only feedback is feedback, and what another viewer got, what the tasker thought, what the analyst thought, doesn't qualify as that.

    If you let those things qualify as feedback and validate you, you will get more and more of them, until your sessions are totally polluted with stuff from your tasker, analyst, monitor, later in the day, the next movie you see, the next target you work, and goodness only knows what else. A hugely important part of the discipline viewers must hold is a clear decision, before the session, about what constitutes feedback, and a refusal to let anything else feel validating. The psyche's avoidance of major belief-system changes will quickly take advantage of a viewer willing to validate something that is not feedback.

    But let's be honest, even I can't deny how fascinating it is that separate viewers sometimes get the same data about a given target. It doesn't matter whether the target is out in space or hidden in a Middle Eastern country; session data--especially when there are multiple sessions on the same target--is still interesting for "speculative discussion" about "what might be."

    This kind of opening to new ideas, discussing options, even brainstorming possibilities, is a healthy part of anybody's intellectual and creative development. It is a normal part of any social group of viewers, who are going to share the things they find most mysterious and intriguing as a matter of human nature. There is a time for this kind of discussion as well.

    It should be clearly stated as such, because like the above note on session experience, it is not appropriate to mix debates about things like protocol in with speculative discussion. All that happens if this is done is that viewers feel attacked, creative discussion is dimmed if not killed altogether, and the "social fabric" of the group having the discussion is more harmed than helped by the process.

    Speculative discussion should be labeled as such, and kept separate from separately-titled talk about more mundane session issues.


    Different Measures. There are so many ways to measure accuracy it makes the eyes cross counting them. And that's just science. There are quite a few ways used by layman viewers as well (some markedly unscientific, of course). Since they all use a different means of measure, and those measures are valuable for different things (and completely useless as indicators of other things), there is little point to comparing different value systems for this.

    What is fabulously 'accurate' by one means of measure ("appeared to be describing the right target on both sessions; that's 2-for-2, you're 100% accurate!") may be a pitiful showing by another ("only 30% of data points justified as likely accurate"). Whereas a great score by that same latter measure ("98% of all data points accurate!") can be pitiful by yet another ("All this data is generic and applies to most any target on earth and beyond; beyond which there are no decent concepts at all, and a list of a zillion simple descriptors about every differing element of the target provides mostly a dictionary of confusion, not context.")
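    To make the contrast concrete, here is a toy sketch in Python (entirely hypothetical data and deliberately naive scoring rules, not any established RV scoring system) showing how the very same two sessions can score 100% by a session-level measure and only 25% by a data-point measure:

```python
# Toy comparison of two "accuracy" measures over the same hypothetical data.
# Each session is a list of (data_point, judged_accurate) pairs.
sessions = [
    [("round", True), ("hard", True), ("red", False), ("tower", False),
     ("water", False), ("cold", False), ("metal", False), ("tall", False)],
    [("structure", True), ("flat", True), ("natural", False), ("dry", False),
     ("open", False), ("bright", False), ("sandy", False), ("warm", False)],
]

# Measure 1: session-level. Naively, any accurate point counts the whole
# session as "describing the right target."
session_hits = sum(1 for s in sessions if any(ok for _, ok in s))
measure_1 = 100.0 * session_hits / len(sessions)    # "2-for-2: 100%!"

# Measure 2: data-point-level. Fraction of individual points judged accurate.
points = [ok for s in sessions for _, ok in s]
measure_2 = 100.0 * sum(points) / len(points)       # only 4 of 16 points

print(f"Session-level measure: {measure_1:.0f}%")   # prints 100%
print(f"Data-point measure:    {measure_2:.0f}%")   # prints 25%
```

    Neither number is "the" accuracy; each measure answers a different question, which is exactly the point.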

    Different criteria. Related to, but also beyond, the above is the issue of what data is considered accurate. People have different criteria for counting which data points are accurate. For some, it has to be IN the feedback. For some, it has to be 'obviously inferred from' the feedback. For some, it has to be 'reasonably possible' based on the feedback. For others, if it even seems to address the intent of the tasker, it is assumed accurate, even if there is no actual feedback. Even if viewers are using the same general means of 'measure', there is no fairness of comparison if their criteria within that measure are completely different.

    These examples illustrate that discussions on accuracy should probably be had in areas shared with others who specifically use the same sort of accuracy measure and criteria for determination--or at least, where all the viewers understand these issues and are flexible about understanding others may differ.

    It is annoying when a viewer uses one measure they take seriously, only to have some other viewer waxing poetic about their assumedly "better" accuracy rate when it's ridiculous to compare, or the "fantastic 100% accuracy rate!" of some viewer who would be considered a poor novice by any other measure.

    Oversimplification and Hype. I personally feel the phrase "A direct hit!" should be erased from viewer vocabulary, given how badly it is so often applied. Partially accurate data--even data which has zero feedback but seems to match tasker intention--is often given this hyperbole out of enthusiasm. Even when a session IS a really nice in-protocol session, other terms could be used that value more than the overall hit/miss concepts.

    It's nice that we want to be supportive and encouraging to viewers, but going overboard tends to bring criticism from others, refocusing the discussion into an argument about protocol or measure or personal judgement of others. The person who loses most is the original viewer, who might never have made such bold claims themselves, and who might really want to discuss an experiential or speculative aspect of the session. My advice: be supportive but be realistic -- do other viewers the favor of not being more harm than help in your support.


    I figure somebody will read this and think, "Why does all this stuff matter? Why can't viewers just have a simple conversation?"

    Well, because whether viewers CAN have a simple conversation that has value to them and others depends on the conversation not being redirected, refocused, or reviled by other viewers, who wish to address different aspects of the overall remote viewing process than the viewer who is sharing.

    I see viewers who want to discuss an experience but others only want to talk about whether their data was accurate or not. I see viewers who want to speculate about their data and its interest, only to have others complain about the protocol problem of feedback not yet being available. I see viewers who want help with a methodology point, only to have others either dismiss the import of any method or address some other aspect of their session.

    I think all viewers are self-taught. Even those formally trained often have very limited time with their trainers. And even for the few who get more, all learning comes from within, whether it's RV or martial arts or anything else in life: teachers can 'present' information, but only the student can 'learn' it.

    I also think most viewers on the internet are learning as much about the medium of communication and the social environ of the RV field as they are about RV itself. Many new viewers have never considered all these aspects of RV and how they differ in conversation. They do a few sessions, they're delighted, they start to talk about it, and they end up feeling ignored or abused due to the response of other viewers.

    This is one of a million tiny 'educational' issues in the RV field at large that can contribute positively to a viewer's personal experience and ability to have positive experiences with others. An awareness of these issues in a group of viewers makes a huge difference in the quality of conversation. And an open statement from someone about what aspect of RV they wish to converse about, should help keep the discussion on that thread focused on the viewer's interest; discussions about other aspects can be addressed separately.

After reading yet another of the endless number of online 'debates' about RV...

...These are my thoughts for the day.


You can send email to PJ Gaenir about this editorial.


The Firedocs Remote Viewing Collection is now a static archive (Feb 2008). Click here to see what's still online for reference.

All contents on this website are Copyright © 1995 to present by Palyne 'PJ' Gaenir. All rights reserved.
Permission is given to reproduce anything in small quantity, but online only, and please mention/link source.