Remote Viewing

Viewer Profiling
with notes on arbitrary scoring issues

Viewer Profile: An ongoing database of the details of a Viewer's sessions. Each data component they provide is judged for accuracy against the feedback. A Viewer's overall and specific abilities are judged by their Profile, which details quantity, accuracy, and consistency for that individual. VPs are required by quality instructors as well as by RV project managers. "Serious" Viewers are expected to keep one. (Even though the paperwork can sometimes take longer than the session!)

Here's a description of the items on the session profile sheet, and what data goes in them. Also, some conversation about accuracy determinations and arbitrary scoring issues.

Viewer: Name or ID# of the person doing the Viewing.
Target#: The target number (or "coordinates") for the session.
  This can be written in after the session is over, if you wish not to hear it until you are ready for Stage 1. Target #s can be defined by the Viewer or Monitor based on something as arbitrary as the date, if preferred. For instance, a target first Viewed by Vwr#154 on October 04, 1997, might have the # 041097 154001. Continuing sessions on that target would be ....154002, 3, etc. There is no requirement that a target have any number. The point of the number is to provide an "address" for the Viewer to focus on without giving away any info about the target; "Describe the target" as a direction would do just as well. Also, the number (if determined as above) provides a reference to the date of the session, Viewer number, session sequence, etc. Random numbers also work fine; however, they are no help for Viewer Profiling.
Vwr-Loc: The Viewer's location for the session. This can be specific, such as coordinates; general, such as a city or state; or a combination, such as "my house, AZ."
Date: Date of the session.
  If you are "continuing" a session on the same target on a different date (such as over the midnight date change), that "one session" would be databased under the initial session number and therefore the initial date. If, however, you are doing a separate session on the same target, you would have revised your target# to indicate it was the next session, and so your date would be whatever day that next session is done.
Start Time: Time the session begins.
End Time: Time the Viewer writes "session end."
Notes: Any comments the Viewer may wish to make, such as that they just drank caffeine or ate, or anything else that, when looked at in combined retrospective, might give some clue to how these things affect their sessions.
  The notes field is not for physical or emotional discomforts, ideas about the target to come, etc. Those are part of the CRV structural layout, not part of these header fields.
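Taken together, the header fields above amount to one small fixed record per session. Here is a minimal sketch of keeping that record, assuming the date-based target-number scheme described under Target#; the field names, types, and helper function are my own illustration, not part of any CRV standard:

```python
from dataclasses import dataclass
from datetime import date

def target_number(first_session: date, viewer_id: int, sequence: int) -> str:
    """Date-based target number: DDMMYY, then viewer# and session sequence."""
    return f"{first_session:%d%m%y} {viewer_id:03d}{sequence:03d}"

@dataclass
class SessionHeader:
    viewer: str        # name or ID# of the person doing the Viewing
    target_no: str     # may be written in after the session is over
    vwr_loc: str       # e.g. "my house, AZ"
    session_date: date
    start_time: str    # time the session begins
    end_time: str      # time the Viewer writes "session end"
    notes: str = ""    # caffeine, food, etc. -- not structural CRV content

# Example: Vwr#154's first session on a target, October 04, 1997.
hdr = SessionHeader(
    viewer="154",
    target_no=target_number(date(1997, 10, 4), 154, 1),  # "041097 154001"
    vwr_loc="my house, AZ",
    session_date=date(1997, 10, 4),
    start_time="21:05",
    end_time="21:40",
)
```

A continuing session on the same target would simply bump the sequence argument (154002, 154003, and so on) while keeping the original date portion.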

After the main data-collection process
(methods, such as CRV):

When a Viewer has finished the data collection part of the session, there are three things still to be done, if they are maintaining a Viewer Profile.

1) A session summary. (Actually, this is done as part of CRV structure, regardless of profile.) The Viewer goes through their session notes from the beginning, and writes down all the data in complete sentences. If the data stands alone, the sentences may read, "There is red. There is reflectivity." If the data is combined, the sentences may read, "There is a tall, squarish structure with windows. The structure has white. The structure has shutters." You can combine related data into one sentence or leave it separate, as long as you are using sentences. This causes your mind to consider associations and relationships of data that you may have perceived, but may not have recorded. When you begin doing this -- or attempt to do it for someone else's session as an exercise -- you will shortly see how many ways data can be interpreted if it is not put into this proper summary or outline format.

You may still be Viewing while writing this summary, and if you obtain new impressions while writing it, you can include them. This is a place where some cohesiveness of data can be gained. It is also a place where AOL can influence your new data and new associations though, so beware!

Here is a summary from one of my very first (Stage 1 and 2) solo sessions. In this particular session, I did fairly well at picking up the gestalts and a few details, and then promptly went into "AOL Drive" on the central target focus, which I became convinced, due to its shape, must be a boat. My initial data was accurate; my data late in the session was not.

The target is wooden, light colored, a single focus, with wood beams. Multiple growth; tall vertical skinny things around it. Blue sky. Focus is large. Strength. Peace. Outside. Cold/Chilly. Main colors white, brown, green. Fresh tangy smell. Focus is pointed. Water.

Please note that this is not a very proper summary -- not only did I not use complete sentences, but I didn't include all the session data in the summary. I was going to alter it to be a better example but decided that would be a tad bit dishonest. :-) While writing this file I can only find this one beginning session to use as an example, so I'm stuck with it for now.

2) A data outline. The Viewer goes through their summary and writes the data into an outline format that groups together anything which should be associated, for the sake of determining accuracy in the session profile. For the session above, here was my outline (proper outline form is more difficult in HTML, so I'm just listing it here in paragraph form):

There is: (water, green, tall, strong, aliveness, wide area, curving, growth, fixed form, hardness, manmade [of natural materials], multiple things [at the side, growing, related-like-family]). Focus is: (large, graceful, balanced.) Distant horizon. Something high to the right. There is: (white, brown, off-white, tan, strength, peace). Smells like: (tangy, specific, salty). Something in/on/floating. Focus has: (flatness, woodenish, natural but smoothed, corners on edges like square beams). It is: (cold, sloshing, wet). Open space to the left. Vertical. Cloth or canvas [light color]. Blue. Open space behind the focus. Connected vertical things.

In other words, if you specifically say that the structure was white, that is different than saying "the structure has white" or "there is white at the target." Whether or not your data is accurate is going to depend upon what you actually said. It does not "count" to say there is a green structure, if the target has a white structure surrounded by a green lawn. In that case, "structure" would be correct, and "green" would be correct if it were listed as stand-alone data. The "green" data, were it considered part of the structure, would be incorrect. Chances are when you begin doing this you will find lots of data that is worded incorrectly but you really knew what you meant. Mark it wrong! Learn from that. You can be the best Viewer in the world, but if you can't communicate clearly, your data is useless.
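The stand-alone versus attached distinction can be made mechanical. Here is a minimal sketch using the white-structure-on-a-green-lawn example; the DataPoint representation, the feedback set, and the scoring function are my own illustration, not part of any CRV standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataPoint:
    value: str                         # e.g. "green"
    attached_to: Optional[str] = None  # None means stand-alone data

# Everything the feedback (a white structure on a green lawn)
# literally supports, recorded in the same literal form.
feedback = {
    DataPoint("structure"),
    DataPoint("white"),
    DataPoint("white", "structure"),
    DataPoint("green"),
    DataPoint("green", "lawn"),
}

def correct(point: DataPoint) -> bool:
    """A data point only counts if it matches feedback as literally worded."""
    return point in feedback

print(correct(DataPoint("green")))               # stand-alone "there is green": True
print(correct(DataPoint("green", "structure")))  # "a green structure": False
```

The same word scores differently depending on what it was attached to, which is exactly why the outline has to preserve the associations you actually recorded.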

Be very literal here. This forces a Viewer to really pay attention to how they say things, how literally they recorded their impressions, and in what detail. This profiling will help you see your learning curve, and much of learning is communication. If you didn't word something right, score it accurately as wrong, and next time you might remember.

Some people prefer to write a data outline as the session summary, combining the two. This is acceptable.

3) A session profile. This is where you record the totals of the accurate/inaccurate/total data, for entry into a database of some sort. (If you don't currently HAVE a database, but are serious about RV, make these sheets, and someday when you get one, you'll have easy entry for it.) Going from your data outline, where you have compared your session data to feedback, you count how many total perceptions you had of each data type, how many were accurate, how many were inaccurate, and how many had no feedback for determining accuracy. (If there is no feedback, you cannot score a data point. However, this gets a little complex. See "arbitrary scoring," below.)

When databased over time, your Viewer Profile will show you what data types you tend to pick up, and those you don't; what data types you tend to have a high rate of accuracy on, and those you don't. It will show you what quantity of data you tend to get, both overall and in each data-category, and what quantity of AOL-data you tend to get and how accurate it is. It will also show you your consistency -- Viewers tend to vary, particularly at first. A Viewer Profile is critical for an instructor of advanced students to have for reference; it is critical for a tasker in applications to have as well. If you are a remote viewing student and were not taught anything about Viewer Profiling, I recommend you consider adopting this traditional part of RV and begin to include it in your routine.

For the session above, here's the breakdown on my profile sheet:

Data Items: Smells=3 unknown. Colors=4 correct, 3 incorrect. Temperature=1 unknown. Textures=3 correct. Sounds=1 incorrect. Ambience=4 correct. Composition=2 correct, 1 incorrect. Names=1 incorrect. Sizes=2 correct. Shapes=3 correct. Relationships=1 correct, 1 incorrect, 1 unknown. Positions=3 correct, 1 unknown. Lives=1 correct. Structure=3 correct.

Things that I didn't count initially, because my first profile sheet didn't have room for them: Gestalts=1 correct, 2 incorrect. Other=1 correct, 2 incorrect. Also, I counted my AOLs for the record, though they're not part of the profile: Emotions=3 incorrect, 3 unknown. Names=1 incorrect, 1 unknown. Relationship=4 unknown.

So the session ended with 33 countable (feedbackable and non-AOL) data components: 26 were correct; 7 were incorrect. My 'overall' accuracy for that single session was therefore 78.79% (total accurate divided by total countable points). My accuracy for this session on the data type "sizes" was 100%, on the data type "sounds" was 0%, and on the data type "composition" was 66.67%.
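The arithmetic above can be checked in a few lines. This sketch just transcribes the profile-sheet breakdown as per-type tallies of (correct, incorrect, no-feedback) and totals them:

```python
# (correct, incorrect, unknown/no-feedback) per data type, from the sheet above.
tallies = {
    "smells": (0, 0, 3),      "colors": (4, 3, 0),        "temperature": (0, 0, 1),
    "textures": (3, 0, 0),    "sounds": (0, 1, 0),        "ambience": (4, 0, 0),
    "composition": (2, 1, 0), "names": (0, 1, 0),         "sizes": (2, 0, 0),
    "shapes": (3, 0, 0),      "relationships": (1, 1, 1), "positions": (3, 0, 1),
    "lives": (1, 0, 0),       "structure": (3, 0, 0),
}

correct = sum(c for c, i, u in tallies.values())     # 26
incorrect = sum(i for c, i, u in tallies.values())   # 7
countable = correct + incorrect                      # 33 (unknowns don't count)
print(f"overall: {100 * correct / countable:.2f}%")  # overall: 78.79%
```

Per-type accuracy works the same way on each tally alone: correct divided by (correct plus incorrect), with no-feedback items left out entirely.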

This is just one session; numbers from a single session mean nothing. You will want to get at least 100 sessions done and logged into your profile just to begin, and then make a report on each data type. You can design a graph that will immediately show you what types of data you get the most input from, and which types you don't tend to pick up on at all. Another might show you what types of data you have the highest error ratio in. If you really want to learn remote viewing, understand yourself, and continue to improve, then keeping track of these details so you can direct yourself is important.
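Over many sessions, the per-type report is a simple aggregation. A sketch, assuming each logged session is stored as a dict of per-type (correct, incorrect, unknown) tallies; the sample data here is made up for illustration:

```python
from collections import defaultdict

# Hypothetical logged sessions: {data_type: (correct, incorrect, unknown)}.
sessions = [
    {"colors": (4, 3, 0), "sounds": (0, 1, 0), "smells": (0, 0, 3)},
    {"colors": (2, 0, 1), "smells": (1, 0, 0)},
]

totals = defaultdict(lambda: [0, 0, 0])
for session in sessions:
    for dtype, counts in session.items():
        for slot, n in enumerate(counts):
            totals[dtype][slot] += n

for dtype, (c, i, u) in sorted(totals.items()):
    mentions = c + i + u                     # how often you pick this type up
    rate = f"{100 * c / (c + i):.0f}%" if c + i else "n/a"
    print(f"{dtype:8s} mentioned {mentions:2d} times, accuracy {rate}")
```

From totals like these it is straightforward to chart which data types you report most, and which have the highest error ratio.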

(For the curious, the target above turned out to be a white wooden church with a pointy steeple, set on a green lawn, surrounded by tall thin trees, under a blue sky. No water is in the photograph, and the season/temperature is difficult to judge. There is a clock face on the front of the church and a small sign in the grass, neither of which I mentioned any part of.)

A tasker will task toward the strengths of a number of different Viewers, to provide the best possible results from the collected team. For instance, a Viewer very good with motion and composition might be tasked toward those elements of a target, while another Viewer better at purpose and concepts will be tasked toward that. A teacher on the other hand will task toward the weaknesses of a student Viewer, to help them learn to acquire certain types of data, and to help them get practice and feedback in those data types they are not as intuitively accurate with or prone to picking up on. (Obviously, if a student isn't taught to keep these records and doesn't share results with the instructor, this part of the education wouldn't be possible.)

In the science lab, sessions are scored differently -- a dramatically different way of coming up with a number. Don't confuse these session-profile numbers with those given for formal lab Viewers; the two cannot be compared.

Arbitrary Scoring

What if you describe something, and when you get feedback, it is clear that what you described is involved in the target -- or could be, or has been -- but you cannot see it clearly in the feedback photo or on a visit to the site?

Here is where "arbitrary scoring issues" begin. I asked Joe McMoneagle for some casual input to these common session profile dilemmas.

Q: If you have "revving motor sounds" in your data, and the target feedback is a photograph of a yacht, is that data accurate? The yacht does have a motor. However, there is no indication from the photo whether or not that motor is revving or even on at that moment. Maybe the revving is one of many motors that could be assumed to be near the boat in dock. Or not.

A: Yes. It's implied by the photograph.

Q: If you say there is a biological or human at the site, and there is not, can you include as feedback the fact that a human almost had to be at the site in order to take a picture of it?

A: Probably. But, it adds nothing of value to the target, unless it's a significant component of the target; e.g., target needs people to be a target. In other words, you can say that "theoretically" about 95% of the targets--so who cares.

Q: If your target is one object or structure, and you describe both that item and another behind it, are the accurate descriptions of the item behind the intended target considered correct?

A: Depends on the targeting instructions. If you were asked to describe only the target, no. If you were asked to describe the area, yes. If you are using it as a training target and are practicing CRV, no. If you are using it as a training target and just wondering about how your mind might be working, yes.

Q: If you describe fire and burning, and the target turns out to be the cold, charred remains of a house, can you count the fire/burning data as accurate?

A: Depends on what your targeted time period is. If you said describe the target, it could be a yes. If you said tell me about the target this moment, it could still be yes, if you were trying to determine the condition of the target. Or, no, if you specifically wanted to know something other than condition about the target. In this case, I would tend to yes in most circumstances.

Q: How can you say you have feedback for anything like a smell or sound or temperature or direction etc. from photo feedback?

A: You can, only if it's implied. In other words, if it's a "stretch" forget it, you've just missed the target.

Q: Where do you draw the line between "Proven accurate," "almost certainly accurate but not in the feedback," "probably accurate," "possibly accurate, could go either way" and "inaccurate?"

A: Accuracy is a measure of what you can prove to be in the target. That can include the photographs, or the site (if an outbounder was used), or everything that is pertinent to the site, dependent upon how it was targeted.

Q: How do you decide which of those "in-between" determinations should simply be called "no feedback for scorable data" instead?

A: If there is doubt--forget it. And there will always be doubt about something. Just let it go and move on to the next target. It's one of the reasons RV will never be 100%.

One thing Joe's responses made clear is that proper targeting is important. (Targeting in this case means the definition of the remote viewing task by the person assigning the target. "Tasking" is the instructions that person actually gives to the Viewer; "targeting" is the definition [not told to the Viewer] of what that tasking is aimed at.) Targeting is important to profiling and accuracy determination; it is difficult to separate the aspects of remote viewing from their effects on each other. If the targeting for a given target, compared to results upon feedback, is not clear enough to make the accuracy of the session clear, that tells you something about a need for improvement in that area.

Science Horizon Web Media

All logos and original content Copyright © 1996-2001 to Palyne "PJ" Gaenir. All rights reserved.