Representational systems in NLP
In Neuro-Linguistic Programming (NLP), representational systems refer to the different ways in which we represent and process information from the world around us. There are five main representational systems in NLP, abbreviated V-A-K-O-G:
- Visual: This refers to the use of visual images and mental pictures to represent and process information. People who are primarily visual learners tend to think in terms of pictures, colors, and shapes.
- Auditory: This refers to the use of sounds, words, and music to represent and process information. People who are primarily auditory learners tend to think in terms of sounds, tone of voice, and language.
- Kinesthetic: This refers to the use of physical sensations and feelings to represent and process information. People who are primarily kinesthetic learners tend to think in terms of touch, emotions, and bodily sensations.
- Olfactory: This refers to the use of smells to represent and process information. People who are primarily olfactory learners tend to think in terms of smells and scents.
- Gustatory: This refers to the use of tastes to represent and process information. People who are primarily gustatory learners tend to think in terms of tastes and flavors.
In NLP, individuals are encouraged to identify their preferred representational system(s) and to use this knowledge to improve their communication and learning. For example, if someone is primarily a visual learner, they may benefit from using visual aids such as diagrams or videos when learning new information. If someone is primarily an auditory learner, they may benefit from verbal explanations or discussions. By being aware of and using the different representational systems, individuals can improve their communication and understanding of themselves and others.
Predicates in NLP
In NLP, predicates refer to the words or phrases that people use to describe their experiences. These predicates provide clues about how people process and represent information in their minds. By paying attention to the predicates that a person uses, an NLP practitioner can get insights into their thought patterns, beliefs, and attitudes.
There are two types of predicates in NLP:
- sensory-specific predicates, which describe information that can be perceived through the senses (seeing, hearing, feeling, smelling, and tasting) and
- non-sensory specific (or abstract) predicates, which refer to concepts or ideas that cannot be directly perceived.
By paying attention to the types of predicates that someone uses, an NLP practitioner can tailor their language to better connect with and influence that person. For example, if someone tends to use a lot of visual predicates, an NLP practitioner might use visual language and metaphors to better communicate with them.
Sensory-specific predicates in NLP
Sensory-specific predicates refer to the language used to describe sensory experiences. They include visual (sight), auditory (sound), kinesthetic (touch), olfactory (smell), and gustatory (taste) predicates.
These predicates help to identify the preferred representational system of an individual and can be used to communicate more effectively with them.
For example, someone might use visual predicates when describing a memory, such as "I see myself walking down the street" or "That looks familiar to me." Someone might use auditory predicates when describing a conversation, such as "I hear what you're saying" or "That sounds like a good idea." And someone might use kinesthetic predicates when describing a feeling, such as "I feel uneasy about this" or "I have a gut feeling that something's wrong."
Here are more examples of sensory-specific predicates:
- Visual: "I see what you mean", "That looks interesting", "Can you show me?"
- Auditory: "I hear you", "That sounds good", "Can you say that again?"
- Kinesthetic: "I feel great about this", "That feels right", "Can you grasp what I'm saying?"
- Olfactory: "That stinks", "I smell trouble", "The air is fresh"
- Gustatory: "That leaves a bad taste in my mouth", "I crave something sweet", "The food is spicy"
Using sensory-specific predicates can help build rapport with individuals and increase the effectiveness of communication by matching the language used to their preferred representational system.
Non-sensory specific predicates in NLP
Non-sensory specific (abstract) predicates in NLP refer to language that does not relate to the five senses, but rather to abstract concepts, emotions, and thoughts. They are useful in communicating about internal states, beliefs, values, and ideas that cannot be directly observed. Non-sensory specific predicates can be used to explore the deeper meanings behind sensory experiences and to access and change inner experiences.
Some examples of non-sensory specific predicates include "understand", "think", "know", "believe", "decide", "learn", and "consider", as in "I understand your point" or "Let's think this through".
Using these words can help to clarify a person's thoughts, beliefs, and desires, and can assist in creating more precise and effective communication. Non-sensory specific predicates can also be used to shift a person's focus from sensory experience to internal experience, which can be useful in reframing or changing limiting beliefs and behaviors.
Submodalities in NLP
Submodalities in NLP refer to the sensory attributes or qualities that we use to represent our internal experiences. They are the building blocks of our subjective experience and include things like brightness, color, distance, volume, and texture.
Submodalities are important in NLP because they play a significant role in shaping our thoughts, emotions, and behaviors. By identifying and manipulating submodalities, individuals can change the way they experience and respond to different situations, which can lead to more effective communication, personal growth, and behavior change.
For example, in NLP, a technique called the "swish pattern" involves changing the submodalities of an undesirable experience to transform it into a more desirable one. This technique involves creating a mental image of the undesirable experience and then using a visual or auditory cue to quickly switch to a mental image of the desirable experience. By repeating this process, the submodalities of the undesirable experience are gradually replaced by those of the desirable experience, leading to a positive change in behavior and emotions.
Some common submodalities that are used in NLP include:
- Visual Submodalities: These include attributes such as color, brightness, location, and size.
- Auditory Submodalities: These include attributes such as volume, tempo, pitch, and tone.
- Kinesthetic Submodalities: These include attributes such as pressure, texture, temperature, and motion.
- Olfactory/Gustatory Submodalities: These include attributes such as scent, taste, and flavor.
Overall, submodalities are a key aspect of NLP as they can be used to create positive change in thought patterns, emotional responses, and behaviors by altering the way we experience and perceive internal experiences.
Driving submodalities in NLP
Driving submodalities are specific visual, auditory, kinesthetic, and olfactory/gustatory qualities that can significantly influence the intensity and meaning of our experiences. These submodalities are considered to be "driving" because they can determine the level of our emotional response and shape our behavior.
For example, if someone has a fear of public speaking, they may have certain visual submodalities such as seeing themselves making mistakes, seeing the audience as critical or judgmental, or seeing themselves being embarrassed. These submodalities may be driving their fear and keeping them stuck in their limiting belief.
By changing these driving submodalities through techniques such as submodality interventions or the swish pattern, the person can reduce the intensity of their fear and create a more positive and empowering internal representation. They can change the images, sounds, or feelings associated with the fear and replace them with more positive and resourceful submodalities. This can lead to increased confidence and success in public speaking and other areas of life.
Eye Accessing Cues in NLP
Eye Accessing Cues are a key aspect of NLP that involve the observation and interpretation of eye movements in relation to cognitive processing. According to NLP, people's eye movements can provide insight into their thought patterns, sensory preferences, and mental processing.
The development of Eye Accessing Cues (EACs) in NLP is attributed to the work of Richard Bandler and John Grinder. Bandler and Grinder observed that people tend to look in certain directions when accessing different types of information in their minds, such as visual, auditory, or kinesthetic information. They believed that these eye movements could be used to help understand a person's thought processes and to help facilitate better communication.
To develop the EACs, Bandler and Grinder studied the eye movements of therapists and people in therapy sessions. They found that when people were accessing visual information, they tended to look up and to the left or right, while accessing auditory information led to eye movements sideways to the left or right, and accessing kinesthetic information led to downward eye movements.
Based on these observations, Bandler and Grinder developed a model of eye movements that could be used to identify which representational system someone was using at a particular moment. The EACs have since been widely used in NLP to help understand communication and thought processes, and to identify patterns that can be used to facilitate change.
The basic premise of eye accessing cues is that different eye movements are associated with different cognitive processes, and that by observing a person's eye movements, we can gain insights into their internal thought processes and sensory experiences.
The following is a general guide to the eye accessing cues and their associated cognitive processes:
- Up and to the left: Visual Constructed (Vc) - When a person looks up and to the left, it is believed that they are accessing visual imagery that they have constructed in their mind, such as a mental image of a future event or a visualization of a new idea.
- Up and to the right: Visual Remembered (Vr) - When a person looks up and to the right, it is believed that they are accessing visual memories, such as recalling a mental image of a past event.
- Sideways and to the left: Auditory Constructed (Ac) - When a person looks sideways and to the left, it is believed that they are accessing internally generated sounds or auditory images, such as imagining the sound of a loved one's voice.
- Sideways and to the right: Auditory Remembered (Ar) - When a person looks sideways and to the right, it is believed that they are accessing auditory memories, such as recalling a song or a past conversation.
- Down and to the left: Kinesthetic (K) - When a person looks down and to the left, it is believed that they are accessing internal sensory experiences, such as physical sensations or emotions.
- Down and to the right: Internal Dialogue (Ad) - When a person looks down and to the right, it is believed that they are accessing their internal dialogue or self-talk.
It is important to note that eye accessing cues are not foolproof indicators of cognitive processes, and that they should be used in conjunction with other forms of communication and observation. Additionally, different people may have different eye accessing cue patterns, so it is important to establish a baseline for each individual before making any conclusions based on their eye movements.
Would you like to discover your dominant representational system and learn more about your driving submodalities? Contact us here for a free 30-minute session and let's discuss it together.
Part 5 of the series about NLP focuses on the Meta Model. Continue with Part 5 here.