Research methods


Experimental method


  • In psychological research, aims are developed from theories. They are general statements that describe the purpose of an investigation.
  • E.g. 'To investigate whether drinking energy drinks makes people more talkative.'


  • After the aim, you write a hypothesis. A hypothesis is a statement that is made at the start of the study and clearly states the relationship between variables as stated by the theory.
  • In a directional hypothesis the researcher makes clear the sort of difference that is anticipated between two conditions. 
  • A non-directional hypothesis simply states that there is a difference between conditions or groups but, unlike in a directional hypothesis, the nature of the difference is not specified. 
  • Psychologists tend to use a directional hypothesis when the findings of previous research studies suggest a particular outcome. When there is no previous research, or findings from earlier studies are contradictory, they will instead decide to use a non-directional hypothesis. 
1 of 68

Experimental method

Independent and dependent variables:

  • In an experiment, a researcher changes or manipulates the independent variable (IV) and records or measures the effect of this change on the dependent variable (DV)

Levels of IV:

  • In order to test the IV we need different experimental conditions. The two conditions are the control group and the experimental group. 

Operationalisation of variables:

  • Many things that psychologists are interested in are hard to define. Thus, in any study, one of the main tasks for the researcher is to operationalise the variables being investigated: to define them as clearly and measurably as possible.
2 of 68

Control of variables

Extraneous variables:

  • The key to an experiment is that an IV is manipulated to see how this affects the DV. Any other variables that might potentially interfere with the IV or the DV should be removed or controlled. These additional variables are called extraneous variables. 

Confounding variables:

  • Any variable, other than the IV, that may have affected the DV, so we cannot be sure of the true source of changes to the DV. Confounding variables vary systematically with the IV.

Demand characteristics:

  • Any cue from the researcher or from the research situation that may be interpreted by participants as revealing the purpose of the investigation. This may lead to a participant changing their behaviour within the research situation.  
3 of 68

Control of variables

Investigator effects:

  • Any effect of the investigator's behaviour on the research outcome. This may include everything from the design of the study to the selection of, and interaction with, participants during the research process. 


Randomisation:

  • The use of chance in order to control for the effects of bias when designing materials and deciding the order of conditions. 

Standardisation: Using exactly the same formalised procedures and instructions for all participants in a research study. 

4 of 68

Experimental design

Experimental design:

  • The different ways in which the testing of participants can be organised in relation to the experimental conditions.

Independent groups design:

  • Participants are allocated to different groups where each group represents one experimental condition.

Repeated measures:

  • All participants take part in all conditions of the experiment.

Matched pairs design:

  • Pairs of participants are first matched on some variable that may affect the DV. Then one member of the pair is assigned to condition A and the other to condition B.
5 of 68

Experimental design

Random allocation:

  • An attempt to control for participant variables in an independent groups design which ensures that each participant has the same chance of being in one condition as the other. 


Counterbalancing:

  • An attempt to control for the effects of order in a repeated measures design: half the participants experience the conditions in one order, and the other half in the opposite order. 
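The order split can be sketched in Python. This is a minimal illustration, assuming a hypothetical list of eight participants and two conditions labelled A and B:

```python
# Hypothetical participants in a repeated measures design (assumed names).
participants = [f"P{i}" for i in range(1, 9)]

# Counterbalancing: half experience condition A then B,
# the other half experience B then A.
half = len(participants) // 2
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

# Each order is taken by the same number of participants, so practice
# and fatigue effects are spread evenly across the two conditions.
print(orders["P1"], orders["P8"])
```

Note that order effects are not removed for any individual participant; they are simply balanced so that, across the whole sample, neither condition is systematically advantaged.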
6 of 68

Independent Groups - Evaluation

The biggest issue with IGD is that the Ps who occupy the different groups are not the same. If a researcher finds a mean difference between the groups on the DV this may be more to do with individual differences than the effects of the IV. To deal with the problem, researchers use random allocation.

IGD is less economical than repeated measures as each P contributes a single result only. Twice as many Ps would be needed to produce equivalent data to that collected in a repeated measures design.

The strengths of using IGD are that order effects are not a problem, whereas they are a problem for repeated measures designs. Ps are also less likely to guess the aims.

7 of 68

Repeated measures - Evaluation

The biggest issue for repeated measures is that each P has to do at least two tasks and the order of these tasks may be significant. To deal with this, researchers use counterbalancing.

Order effects also arise because repeating two tasks could create boredom or fatigue that might cause a deterioration in performance on the second task, so it matters what order the tasks are in. Alternatively, Ps' performance may improve through the effects of practice, especially on a skill-based task. Order acts as a confounding variable.

It is also more likely Ps will work out the aim of the study when they experience all conditions of the experiment. For this reason, demand characteristics tend to be more of a feature of repeated measures designs than independent groups. 

The strengths of using repeated measures are that participant variables are controlled and fewer Ps are needed. 

8 of 68

Matched pairs - Evaluation

Ps only take part in a single condition so order effects and demand characteristics are less of a problem.

Although there is some attempt to reduce P variables in this design, Ps can never be matched exactly. Even when identical twins are used as matched pairs, there will still be important differences between them that may affect the DV.

Matching may be time-consuming and expensive, particularly if a pre-test is required, so this is less economical than other designs. 

9 of 68

Laboratory experiments

An experiment that takes place in a controlled environment within which the researcher manipulates the IV and records the effect on the DV, whilst maintaining strict control of extraneous variables.


  • High control over extraneous variables. This means that the researcher can ensure that any effect on the DV is likely to be the result of manipulation of the IV. Thus, we can be more certain about demonstrating cause and effect (high internal validity).
  • Replication is easier than in other types of experiment because of the high level of control. 


  • Lab experiments may lack generalisability. The lab environment may be rather artificial and not like everyday life. 
  • Participants may respond to demand characteristics.
  • Tasks that the participants are asked to carry out are not present in everyday life - low mundane realism. 
10 of 68

Field experiments - Evaluation

In field experiments the IV is manipulated in a natural, more everyday setting.


  • Field experiments have higher mundane realism than lab experiments because the environment is more natural. Thus field experiments may produce behaviour that is more valid and authentic. This is especially the case as participants may be unaware they are being studied (high external validity).


  • However, there is a price to pay for increased realism due to the loss of control of extraneous variables. This means cause and effect between the IV and the DV in field studies may be much more difficult to establish and precise replication is often not possible. 
  • There are also important ethical issues. If participants are unaware they are being studied they cannot consent to being studied, and such research might constitute an invasion of privacy.
11 of 68

Natural experiments - Evaluation

Natural experiments are when the researcher takes advantage of a pre-existing independent variable. This kind of experiment is called 'natural' because the variable would have changed even if the experimenter had not been interested. Note that it is the IV that is natural, not necessarily the setting - participants may be tested in a lab. In a field experiment the setting is natural.


  • Natural experiments provide opportunities for research that may not otherwise be undertaken for practical or ethical reasons, such as the studies of institutionalised Romanian orphans. 
  • Natural experiments often have high external validity because they involve the study of real-life issues and problems as they happen, such as the effects of a natural disaster on stress levels.


  • A naturally occurring event may only happen very rarely, reducing the opportunities for research.
  • Ps may not be randomly allocated to experimental conditions.
12 of 68

Quasi-experiments - Evaluation

Quasi-experiments have an IV that is based on an existing difference between people. No one has manipulated this variable, it simply exists. For instance, if the anxiety levels of phobic and non-phobic patients were compared, the IV of 'having a phobia' would not have come about through any experimental manipulation. 


  • Quasi-experiments are often carried out under controlled conditions and therefore share the strengths of a lab experiment.


  • Quasi-experiments, like natural experiments, cannot randomly allocate participants to conditions and therefore there may be confounding variables. 
13 of 68

Populations and samples

The population: A group of people who are the focus of the researcher's interest, from which a smaller sample is drawn.

For practical and economic reasons, it is usually not possible to include all members of a target population in an investigation so a researcher selects a smaller group, known as the sample. 

Ideally, the sample that is drawn will be representative of the target population so that generalisation of findings becomes possible. In practice, however, it is often very difficult to represent populations within a sample due to their diverse nature. Inevitably then, the vast majority of samples contain some degree of bias. 

Samples are selected using a sampling technique that aims to produce a representative sample. 

14 of 68

Random Sample

A random sample is a sophisticated form of sampling in which all members of the target population have an equal chance of being selected.

To select a random sample: first, a complete list of all members of the target population is obtained. Second, all the names on the list are assigned a number. Third, the sample is generated through the use of some lottery method (a computer randomiser or picking numbers from a hat).
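The lottery method can be sketched in code. The following is a minimal illustration with a hypothetical population list; Python's `random.sample` plays the role of drawing numbered slips from a hat:

```python
import random

# Hypothetical complete list of the target population (assumed names).
target_population = [f"Person_{i}" for i in range(1, 101)]

# Lottery method: random.sample gives every member an equal chance of
# selection, with no member chosen twice.
random.seed(42)  # fixed seed only so the draw can be repeated
sample = random.sample(target_population, k=10)

print(sample)
```

In a real study the seed would not be fixed; it is set here only so the same draw can be reproduced.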


  • Free from researcher bias. They have no influence over who is selected and this prevents them from choosing people who they think will fit their hypothesis.


  • However, random sampling is difficult and time-consuming to conduct. A complete list of the target population may be extremely difficult to obtain.
  • You may end up with a sample that is still unrepresentative.
  • Selected participants may refuse to take part.
15 of 68

Systematic sample

A systematic sample is when every nth member of the target population is selected, for example every 3rd house.

A sampling frame is produced: a list of people in the target population organised into, for instance, alphabetical order. A sampling system is then nominated (e.g. every 3rd person), or the interval may be determined randomly to reduce bias. The researcher then works through the sampling frame until the sample is complete. 
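Taking every nth member of a sampling frame amounts to a simple slice. A minimal sketch, assuming a hypothetical 60-person frame and an interval of 3:

```python
# Hypothetical sampling frame: an ordered list of 60 members (assumed names).
sampling_frame = sorted(f"Person_{i:03d}" for i in range(1, 61))

n = 3  # the nominated interval: every 3rd member is selected
# Work through the frame from the nth member onwards, taking every nth member.
sample = sampling_frame[n - 1::n]

print(len(sample))  # 20 members selected from a frame of 60
```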


  • The sampling method avoids researcher bias. Once the system for selection has been established the researcher has no influence over who is chosen (this is even more the case if the system is randomly selected).
  • It is also usually fairly representative. 
16 of 68

Stratified sample

A stratified sample is a sophisticated form of sampling in which the composition of the sample reflects the proportion of people in certain sub-groups within the target population or the wider population. 

To carry out a stratified sample the researcher first identifies the different strata that make up the wider population. Then, the proportions needed for the sample to be representative are worked out. Finally, the participants that make up each stratum are selected using random sampling. 
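The proportion step can be illustrated briefly. The strata and their sizes below are hypothetical; each stratum's share of the sample mirrors its share of the population, and members within each stratum are then drawn by random sampling:

```python
import random

# Hypothetical target population divided into two strata (assumed sizes).
population = {
    "smokers": [f"S{i}" for i in range(40)],
    "non_smokers": [f"N{i}" for i in range(160)],
}

sample_size = 20
total = sum(len(members) for members in population.values())

random.seed(1)  # fixed seed only so the draw can be repeated
sample = []
for stratum, members in population.items():
    # 40/200 of the population are smokers, so 4 of the 20 places go to
    # that stratum; the remaining 16 go to non-smokers.
    k = round(sample_size * len(members) / total)
    # Participants within each stratum are chosen by random sampling.
    sample.extend(random.sample(members, k))

print(len(sample))  # 4 smokers + 16 non-smokers = 20
```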

17 of 68

Stratified sample - Evaluation


  • Stratified sampling avoids researcher bias. Once the target population has been sub-divided into strata, the participants that make up the numbers are randomly selected and beyond the influence of the researcher. 
  • This method produces a representative sample because it is designed to accurately reflect the composition of the population. This means that generalisation of findings becomes possible. 


  • However, stratification is not perfect. The identified strata cannot reflect all the ways that people are different, so complete representation of the target population is not possible. 
18 of 68

Opportunity sample

Given that representative samples of the target population are so difficult to obtain, many researchers simply decide to select anyone who happens to be willing and available. The researcher simply takes the chance to ask whoever is around at the time of their study. 


  • Opportunity sampling is convenient. This method saves a researcher a good deal of time and effort and is much less costly than other sampling methods. 


  • On the negative side, opportunity samples suffer from two forms of bias. First, the sample is unrepresentative of the target population as it is drawn from a very specific area, such as one street in one town, so findings cannot be generalised to the target population. 
  • Second, the researcher has complete control over the selection of participants and, for instance, may avoid people they do not like the look of (researcher bias). 
19 of 68

Volunteer sample

A volunteer sample involves participants selecting themselves to be part of the sample; hence, it is also referred to as self-selection.

To select a volunteer sample a researcher may place an advert in a newspaper or on a common room notice board. Alternatively, willing participants may simply raise their hand when the researcher asks.


  • Collecting a volunteer sample is easy. It requires minimal input from the researcher and so is less time-consuming than other forms of sampling. 


  • Volunteer bias is a problem. Asking for volunteers may attract a certain 'profile' of a person, that is, one who is helpful, keen and curious; this also affects generalisation. 
20 of 68

Ethical issues

Ethical issues: These arise when a conflict exists between the rights of participants in research studies and the goals of research to produce authentic, valid and worthwhile data.

Informed consent: Involves making Ps aware of the aims of research, the procedures, their rights (inc. the right to withdraw), and also what their data will be used for. Ps should make an informed judgement whether or not to take part without being coerced or feeling obliged. 

Deception: Means deliberately misleading or withholding information from Ps at any stage of the investigation. Ps who have not received adequate information when they agreed to take part cannot be said to have given informed consent. Despite that, there are occasions when deception can be justified if it does not cause the P undue distress.

Protection from harm: As a result of their involvement, Ps should not be placed at any more risk than they would be in their daily lives, and should be protected from psychological and physical harm. Ps are to be reminded of their right to withdraw. 

Privacy and confidentiality: Ps have the right to control information about themselves. This is the right of privacy. If this is invaded then confidentiality should be protected. Confidentiality refers to our right, enshrined in law under the Data Protection Act, to have any personal data protected.

21 of 68

Ways of dealing with ethical issues

BPS Code of Conduct: The BPS, like many other professional bodies, has its own code of ethics, and this includes a set of ethical guidelines. Researchers have a professional duty to observe these guidelines when conducting research - they won't be sent to prison if they don't follow them but may lose their job. 

Dealing with informed consent: Ps should be issued with a consent letter or form detailing all relevant information that might affect their decision to participate. Assuming the P agrees, this is then signed. For investigations involving children under 16, a signature of parental consent is required. 

Dealing with deception and protection from harm - debriefing: At the end of the study, Ps should be given a full debrief. Within this, Ps should be made aware of the true aims of the investigation and any details they were not supplied with during the study, such as the existence of other groups or experimental conditions. Ps are made aware of their right to withhold data and may be offered counselling. 

Dealing with confidentiality: It is useful to maintain anonymity. In case studies, researchers may use numbers or initials when describing the individual. 

22 of 68

Alternative ways of getting consent

Presumptive consent: rather than getting consent from the Ps themselves, a similar group of people are asked if the study is acceptable. If this group agree, then consent of the original Ps is 'presumed'.

Prior general consent: Ps give their permission to take part in a number of different studies - including one that will involve deception. By consenting, Ps are effectively consenting to be deceived. 

Retrospective consent: Ps are asked for their consent (during debriefing) having already taken part in the study. They may not have been aware of their participation or they may have been subject to deception. 

23 of 68

Pilot studies

The aims of piloting 

A pilot study is a small scale version of an investigation that takes place before the real investigation is conducted. The aim is to check that procedures, materials, measuring scales etc. work and to allow the researcher to make changes or modifications if necessary.

24 of 68

Single/double blind procedures

Single blind:

As mentioned when discussing ethical issues, Ps will sometimes not be told the aim of the research at the beginning of a study. As well as this, other details may be kept from Ps, such as which condition of the experiment they are in or whether there is another condition at all. This is known as a single-blind procedure and is an attempt to control for the confounding effects of demand characteristics. 

Double blind:

In a double-blind procedure neither the participants nor the researcher who conducts the study is aware of the aims of the investigation. They are often an important feature of drug trials. Treatment may be administered to patients by someone who is independent of the investigation and who does not know which drugs are real and which are placebos. 

25 of 68

Control groups and conditions

The word control in research is used to refer to the control of the variables but we also use it to refer to setting a baseline. Control is used in many experimental studies for the purpose of setting a comparison. If the change in behaviour of the experimental group is significantly greater than that of the control group, then the researcher can conclude that the cause of this effect was the independent variable (assuming all other possible confounding variables have remained constant).

26 of 68


Observational techniques

One important non-experimental method is observation. Observations provide psychologists with a way of seeing what people do without having to ask them. They also allow researchers to study observable behaviour within a natural or controlled setting. This method allows a researcher the flexibility to study more complex interactions between variables in a more natural way. 

Note that observation is often used within an experiment as a way, for example, of assessing the DV.

27 of 68

Naturalistic and controlled observations

Naturalistic observations take place in the setting or context where the target behaviour would usually occur. All aspects of the environment are free to vary. For instance, it would not make sense to study how senior management and employees in a particular factory interact by dragging the whole of the workforce into an artificial lab setting. It is much better to study interaction in the factory environment where it would normally take place.

It's sometimes useful to control certain aspects of the research situation, so a controlled observation may be preferred. In a controlled observation there is some control over variables, including manipulating variables to observe effects and also control of extraneous variables. 


  • Naturalistic: Tend to have high external validity insofar as findings can be generalised to everyday life, as the behaviour is studied within the environment where it would naturally occur. That said, the lack of control over the research situation makes replication of the investigation difficult. There may also be many uncontrolled extraneous variables that make it more difficult to judge any pattern of behaviour.
  • Controlled observations: May produce findings that cannot be as readily applied to real-life settings. Extraneous variables may be less of a factor so replication of the observation becomes easier. 
28 of 68

Overt and covert observations

Behaviour may occasionally be recorded without first obtaining the consent of the participants. Covert observations are those in which the participants are unaware they are the focus of study and their behaviour is observed in secret, say from across a room. Such behaviour must be public and happening anyway if the observation is to be ethical.

In contrast, overt observations are when Ps know their behaviour is being observed and have given their informed consent beforehand.


  • The fact that Ps don't know they are being watched removes the problem of participant reactivity and ensures any behaviour observed will be natural. This increases the validity of the data gathered. 
  • However, the ethics of these studies may be questioned as people, even in public, may not wish to have their behaviours noted down. For instance 'shopping'  would generally be recognised as a public activity, but the amount that people spend on a shopping trip is probably their own business. 
  • In this sense, overt observations are more ethically acceptable but the knowledge Ps have that they are being observed may act as a significant influence on their behaviour. 
29 of 68

Participant and non-participant observations

Sometimes it may be necessary for the observer to become part of the group they are studying, as is the case with participant observations (a confederate).

Non-participant observations are when the researcher remains separate from those they are studying and records behaviour in a more objective manner. It may often be impractical or even impossible to join particular groups, so that non-participation is the only option - a 50-year-old researcher can't join a year 10 class. 


  • In P observations, the researcher can experience the situation as the Ps do; giving them increased insight into the lives of the people being studied. This may increase the validity of the findings. 
  • There is a danger, however, that the researcher may come to identify too strongly with those they are studying and lose objectivity. Some researchers refer to this as 'going native' when the line between being a researcher and being a P becomes blurred. 
  • Non-P observations allow the researcher to maintain an objective psychological distance from their Ps so there is less danger of them 'going native'. However, they may lose the valuable insight to be gained in a P observation as they are too far removed from the people and behaviour they are studying. 
30 of 68

Issues in the design of observation


One of the key influences on the design of any observation is how the researcher intends to record their data. The researcher may simply want to write down everything they see. This is referred to as an unstructured observation and tends to produce accounts of behaviour that are rich in detail. This method may be appropriate when observations are small in scale and involve few Ps. 


Often, however, there may be too much going on in a single observation for the researcher to record it all. Therefore, it is necessary to simplify the target behaviours that will become the main focus of the investigation. Structured observations allow the researcher to quantify their observations using a pre-determined list of behaviours and sampling methods. 

31 of 68

Behavioural categories

In order to produce a structured record of what a researcher sees, it is first necessary to break the target behaviour up into a set of behavioural categories (sometimes referred to as a behavioural checklist). This is very similar to the idea of operationalisation. Target behaviours to be studied should be precisely defined and made observable and measurable. 

Before the observation begins, the researcher should ensure that they have, as far as possible, included all of the ways in which the target behaviour may occur within their behavioural checklist. 

32 of 68

Sampling methods

Continuous recording of behaviour is a key feature of unstructured observations in which all instances of a target behaviour are recorded. For very complex behaviours, this method may not be practical or feasible. Instead, in structured observations, the researcher must use a systematic way of sampling their observations. 

Event sampling: A target behaviour (event) is first established, then the researcher records every time it occurs in a target individual or group. 

Time sampling: A target individual or group is first established, then the researcher records their behaviour within a fixed time frame, say every 60 seconds.
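Sampling at fixed intervals can be sketched as a slice over a continuous record. The behaviour codes below are hypothetical, with one code noted per second and behaviour recorded every 3rd second:

```python
# Hypothetical continuous record: one behaviour code noted per second.
record = ["talk", "play", "talk", "sit", "play", "talk",
          "sit", "sit", "play", "talk", "sit", "play"]

interval = 3  # record behaviour only every 3rd second (fixed time frame)
time_sample = record[::interval]

print(time_sample)  # ['talk', 'sit', 'sit', 'talk']
```

The sample is far smaller than the full record, which is the point of time sampling, but any behaviour occurring between the sampled moments is lost.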

33 of 68

Structured vs unstructured - Evaluation

Structured observations that involve the use of behavioural categories make the recording of data easier and more systematic. The data produced is likely to be numerical, which means that analysing and comparing the behaviour observed between Ps is more straightforward. In contrast, unstructured observations tend to produce qualitative data, which may be much more difficult to record and analyse. 

However, unstructured observations benefit from more richness and depth of detail in the data collected. There may, though, be a greater risk of observer bias with unstructured observations, as the objective behavioural categories that are a feature of structured observations are not present here. The researcher may only record those behaviours that 'catch their eye' and these may not be the most important or useful.

34 of 68

Behavioural categories - Evaluation

Although the use of behavioural categories can make data collection more structured and objective, it is important that such categories are as clear and unambiguous as possible. They must be observable, measurable and self-evident. In other words, they should not require further interpretation.

Researchers should also ensure that all possible forms of the target behaviour are included in the checklist. There should not be a 'dustbin category' in which many different behaviours are deposited. 

Finally, categories should be exclusive and not overlap; for instance, the difference between 'smiling' and 'grinning' would be very difficult to discern. 

35 of 68

Sampling methods - Evaluation

Event sampling is useful when the target behaviour or event happens quite infrequently and could be missed if time sampling was used. However, if the specified event is too complex, the observer may overlook important details if using event sampling. 

Time sampling is effective in reducing the number of observations that have to be made. That said, those instances when behaviour is sampled might be unrepresentative of the observation as a whole.

36 of 68

Inter-observer reliability

It is recommended that researchers do not conduct observational studies alone. Single observers may miss important details or may only notice events that confirm their opinions or hypothesis. This introduces bias into the research process. 

To make data recording more objective and unbiased, observations should be carried out by at least two researchers. It is vital, however, that pairs of observers are consistent in their judgements and that any data they record is the same or very similar. As such observers must be trained to establish inter-observer reliability. 

  • Observers should familiarise themselves with the behavioural categories to be used.
  • They then observe the same behaviour at the same time, perhaps as part of a small-scale pilot study.
  • Observers should compare the data they have recorded and discuss any differences in interpretations. 
  • Finally, observers should analyse the data from the study. Inter-observer reliability is calculated by correlating each pair of observations made, and an overall figure is produced. 
37 of 68

Self-report techniques

Self-report technique: Any method in which a person is asked to state or explain their own feelings, opinions, behaviours and/or experiences related to a given topic.


A questionnaire is a set of written questions used to assess a person's thoughts and/or experiences. A study may simply consist of one question to find out about the kind of dreams people have, or a long list of items designed to assess an individual's personality type. 

A questionnaire may be used as part of an experiment to assess the dependent variable.

  • Open and closed questions:

An open question does not have a fixed range of answers and respondents are free to answer in any way they wish. They tend to produce qualitative data that is rich in detail.

A closed question offers a number of fixed responses. They tend to produce numerical data by limiting the answers respondents can give. Quantitative data like this is usually easy to analyse but it may lack the depth and detail associated with open questions. 

38 of 68

Questionnaires - Evaluation


  • Cost effective. They can gather large amounts of data quickly because they can be distributed to large numbers of people. A questionnaire can be completed without the researcher being present, as in the case of a postal questionnaire, which also reduces the effort involved. 
  • The data that questionnaires produce is usually straightforward to analyse and this is particularly the case if the questionnaire comprises mainly fixed choice closed questions. The data lends itself to statistical analysis, and comparisons between groups of people can be made using graphs and charts.


  • The responses given may not be truthful. Respondents may be keen to present themselves in a positive light and this may influence their answers. This is a form of demand characteristic called social desirability bias.
  • Questionnaires often produce a response bias, which is where respondents tend to reply in a similar way. This may be because respondents complete the questionnaire too quickly and fail to read questions properly.
39 of 68


Interviews

An interview is a live encounter where one person asks a set of questions to assess an interviewee's thoughts and/or experiences. The questions may be pre-set (structured) or may develop as the interview goes along (unstructured). 

Structured interviews:

  • Made up of pre-determined set of questions that are asked in a fixed order. Basically this is like a questionnaire but conducted face-to-face in real time. 

Unstructured interviews:

  • Works a lot like a conversation. There are no set questions. There is a general aim that a certain topic will be discussed, and interaction tends to be free-flowing. The interviewee is encouraged to expand and elaborate their answers as prompted by the interviewer. 

Semi-structured interviews:

  • Many interviews are likely to fall somewhere between the two types described above. Usually interviews have set questions but interviewers are also free to ask follow-up questions when they feel appropriate.
40 of 68

Interviews - Evaluation


  • Straightforward to replicate due to their standardised format. The format also reduces differences between interviewers. 
  • However, given the nature of the structured interview, it is not possible for interviewers to deviate from the topic or elaborate on their points, and this may be a source of frustration for some.


  • Much more flexibility in an unstructured than in a structured interview. The interviewer can follow up points as they arise and is much more likely to gain insight into the worldview of the interviewee. 
  • However, analysis of data from an unstructured interview is not straightforward. The researcher may have to sift through much irrelevant information and drawing firm conclusions may be difficult. 
  • Social desirability bias. 
41 of 68

Designing questionnaires

Likert scales: A Likert scale is one in which the respondent indicates their agreement with a statement using a scale of usually five points, ranging from strongly agree to strongly disagree.

Rating scales: A rating scale works in a similar way but asks respondents to identify a value that represents their strength of feeling about a particular topic (e.g. 1-5).

Fixed choice option: A fixed choice option item includes a list of possible options, and respondents are required to indicate those that apply to them.

42 of 68

Designing interviews

Most interviews involve an interview schedule, which is the list of questions that the interviewer intends to cover. This should be standardised for each participant to reduce the contaminating effect of interviewer bias. Typically, the interviewer will take notes throughout the interview; alternatively, the interview may be recorded and analysed later.

Interviews usually involve an interviewer and a single participant, though group interviews may be appropriate, especially in clinical settings. In the case of a one-to-one interview, the interviewer should conduct the interview in a quiet room, away from other people, as this will increase the likelihood that the interviewee will open up. It is good practice to begin the interview with some neutral questions to make the participant feel relaxed and comfortable, and as a way of establishing rapport. Of course, interviewees should be reminded on several occasions that their answers will be treated in the strictest confidence. This is especially important if the interview includes topics that may be personal or sensitive.

43 of 68

Writing good questions

Clarity is key when designing questionnaires and interviews. If respondents are confused by or misinterpret particular questions, this will have a negative impact on the quality of the information received. With this in mind, the following are common errors in question design that should be avoided where possible. 

Overuse of jargon: Jargon refers to technical terms that are only familiar to those within a specialised field or area. Questions should be simple and easily understood.

Emotive language and leading questions: Sometimes the author's attitude towards a particular topic is clear from the way in which the question is phrased. 

Double-barrelled questions and double negatives: A double-barrelled question contains two questions in one; the issue is that respondents may agree with one half of the question but not the other. Questions that include double negatives can also be difficult for respondents to decipher. 

44 of 68


Correlation: A mathematical technique in which a researcher investigates an association between two variables, called co-variables. 

Co-variables: The variables investigated within a correlation, for example height and weight. They are not referred to as the independent and dependent variables because a correlation investigates the association between the variables, rather than trying to show a cause and effect relationship. 

Positive correlation: As one co-variable increases so does the other. 

Negative correlation: As one co-variable increases the other decreases. 

Zero correlation: When there is no relationship between the co-variables. 

Correlation illustrates the strength and direction of an association between two or more co-variables. Correlations are plotted on a scattergram: one co-variable forms the x-axis and the other the y-axis, and each point on the graph represents the x and y values of one pair of co-variables.
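
The strength and direction of a correlation can be summarised as a correlation coefficient (Pearson's r, which runs from -1 to +1). A minimal sketch in Python, using made-up height and weight values purely for illustration:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two co-variables."""
    mx, my = mean(xs), mean(ys)
    # Sum of products of deviations from each mean
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Square roots of the sums of squared deviations
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data for the height/weight example in the notes
height = [150, 160, 170, 180, 190]
weight = [55, 60, 68, 75, 82]
print(round(pearson_r(height, weight), 3))  # close to +1: strong positive correlation
```

A value near +1 indicates a positive correlation, near -1 a negative correlation, and near 0 no correlation.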

45 of 68

The difference between correlations and experiments

In an experiment the researcher controls or manipulates the IV in order to measure the effect on the DV. As a result of this deliberate change in one variable it is possible to infer that the IV caused any observed changes in the DV.

In contrast, in a correlation, there is no such manipulation of one variable and therefore it is not possible to establish cause and effect between one co-variable and another.

For example, in a correlation involving anxiety, people may be anxious for all sorts of reasons, so the influence of other factors on the second co-variable cannot be disregarded. These 'other variables' are called intervening variables. 

46 of 68

Correlation - Evaluation (Strengths)


  • Correlations are a useful preliminary tool for research. By assessing the strength and direction of a relationship, they provide a precise and quantifiable measure of how two variables are related. This may suggest ideas for future research if variables are strongly related or demonstrate an interesting pattern. Correlations are often used as a starting point to assess possible patterns between variables before researchers commit to an experimental study.
  • Correlations are relatively quick and economical to carry out. There is no need for a controlled environment and no manipulation of variables is required. Data collected by others can be used, which means correlations are less time-consuming than experiments. 
47 of 68

Correlations - Evaluation (Limitations)

  • As a result of the lack of experimental manipulation and control within a correlation, studies can only tell us how variables are related but not why. Correlations cannot demonstrate cause and effect between variables and therefore we do not know which co-variable is causing the other to change. 
  • It may also be the case that another untested variable is causing the relationship between the two co-variables we are interested in - an intervening variable. This is known as the third variable problem. 
  • Largely because of the issues above, correlations can occasionally be misused or misinterpreted. Particularly in the media, relationships between variables are sometimes presented as causal 'facts' when in reality they may not be. For instance, an often-quoted statistic is the relationship between being raised in a single-parent family and an increased likelihood of being involved in crime. This does not mean, however, that single-parent households cause crime or that children from such families will inevitably go on to commit crime. There are many intervening or 'third' variables at work here; for example, children from single-parent families tend to be less well off, and this might explain the link between one-parent families and crime. 
48 of 68

Kinds of data

Qualitative data: Data that is expressed in words and is non-numerical. Thus, a transcript from an interview, an extract from a diary or notes recorded within a counselling session would all be classed as qualitative data. Qualitative methods of data collection are those concerned with the interpretation of language from, for example, an interview or an unstructured observation. 

Quantitative data: This is data that is expressed numerically. Quantitative data collection techniques usually gather numerical data in the form of individual scores from participants such as the number of words a person was able to recall in a memory experiment. Data is open to being analysed statistically and can be easily converted into graphs, charts, etc. 

Which one is best?

Neither - it depends on the purpose and aims of the research. There is also significant overlap between the two: researchers collecting quantitative data as part of an experiment may often interview participants as a way of gaining more qualitative insight into their experience of the investigation. Similarly, there are a number of ways in which qualitative information can be converted into numerical data.

49 of 68

Qualitative data - Evaluation

Qualitative data:

  • Offers the researcher much more richness of detail than quantitative data. It is much broader in scope and gives the participant/respondent more licence to develop their thoughts, feelings and opinions on a given subject.
  • For this reason, qualitative data tends to have greater external validity than quantitative data; it provides the researcher with a more meaningful insight into the participant's worldview.
  • That said, qualitative data is often difficult to analyse. It tends not to lend itself to being summarised statistically, so patterns and comparisons within and between data may be hard to identify.
  • As a consequence, conclusions often rely on the subjective interpretations of the researcher and these may be subject to bias, particularly if the researcher has preconceptions about what he/she is expecting to find.
50 of 68

Quantitative data - Evaluation

  • Essentially the strengths of quantitative data are the opposite of the criticisms above: quantitative data is relatively simple to analyse, so comparisons between groups can be easily drawn. Also, data in numerical form tends to be more objective and less open to bias.
  • On the other hand, quantitative data is much narrower in scope and meaning than qualitative data. It thus may fail to represent 'real life'.
51 of 68

Primary or secondary data

Primary data - Information that has been obtained first-hand by the researcher for the purposes of a research project. In psychology, such data is often gathered directly from participants as part of an experiment, self-report or observation. 

Secondary data - Information that has already been collected by someone else and so pre-dates the current research project. In psychology, such data might include the work of other psychologists or government statistics. 

52 of 68

Primary data - Evaluation

  • The main strength of primary data is that it fits the job. Primary data is authentic data obtained from the participants themselves for the purpose of a particular investigation. Questionnaires and interviews, for instance, can be designed in such a way that they specifically target the information that the researcher requires. 
  • To produce primary data, however, requires time and effort on the part of the researcher. Conducting an experiment, for instance, requires considerable planning, preparation and resources, and this is a limitation when compared with secondary data, which may be accessed within a matter of minutes. 
53 of 68

Secondary data - Evaluation

  • In contrast to primary data above, secondary data may be inexpensive and easily accessed, requiring minimal effort. When examining secondary data, the researcher may find that the desired information already exists, so there is no need to conduct primary data collection. 
  • The flip side is that there may be substantial variation in the quality and accuracy of secondary data. Information might at first appear valuable and promising but, on further investigation, may be outdated or incomplete. The content of the data may not quite match the researcher's needs or objectives. 
54 of 68


Meta-analysis - 'Research about research'; refers to the process of combining results from a number of studies on a particular topic to provide an overall view. This may involve a qualitative review of conclusions and/or a quantitative analysis of the results, producing an effect size.

On the plus side, meta-analysis allows us to view data with much more confidence and results can be generalised across much larger populations. 

However, meta-analysis may be prone to publication bias, sometimes referred to as the file drawer problem. The researcher may not select all relevant studies, choosing to leave out those with negative or non-significant results. The data from the meta-analysis will then be biased because it represents only some of the relevant findings, and incorrect conclusions may be drawn. 

55 of 68

Measures of central tendency

Descriptive statistics: The use of graphs, tables and summary statistics to identify trends and analyse sets of data. Measures of central tendency are summary statistics that identify the average or typical value in a set of data. 

Mean: The arithmetic average, calculated by adding up all the values in a set of data and dividing by the number of values.

Median: The central value in a set of data when values are arranged from lowest to highest.

Mode: The most frequently occurring value in a set of data.
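
The three measures above can be checked with Python's built-in statistics module; the recall scores below are made up purely for illustration:

```python
from statistics import mean, median, mode

# Hypothetical recall scores from a memory test
scores = [2, 3, 3, 5, 7, 8, 9]

print(mean(scores))    # 37 / 7, approximately 5.29
print(median(scores))  # middle value when ordered: 5
print(mode(scores))    # most frequent value: 3
```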

56 of 68

Measures of dispersion

Range: A simple calculation of the spread of scores, worked out by subtracting the lowest value from the highest value.

Standard deviation: A single value that tells us how far scores deviate from the mean.

The larger the standard deviation, the greater the dispersion or spread within a set of data. If we are talking about a particular condition within an experiment, a large standard deviation suggests that not all participants were affected by the IV in the same way because the data are quite widely spread. It may be that there are a few anomalous results.

A low standard deviation reflects the fact that the data are tightly clustered around the mean, which might imply that all participants responded in a fairly similar way. The standard deviation is a much more precise measure of dispersion than the range, as it includes all values within the final calculation. However, for this reason - like the mean - it can be distorted by a single extreme value. 
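
To illustrate the point about spread, here is a short Python sketch using the standard library's pstdev (population standard deviation). The two hypothetical conditions share a mean of 10 but differ in how widely the scores are spread:

```python
from statistics import pstdev  # population standard deviation

condition_a = [10, 11, 9, 10, 10]  # tightly clustered around the mean of 10
condition_b = [2, 18, 5, 15, 10]   # same mean of 10, widely spread

print(round(pstdev(condition_a), 2))  # small SD: participants responded similarly
print(round(pstdev(condition_b), 2))  # large SD: participants affected very differently
```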

57 of 68

Presentation and display of quantitative data

Summarising data in a table

There are various ways of representing data; one of these is in the form of a summary table. It is important to note that when tables appear in the results section of a report they do not contain merely raw scores but data that has been converted to descriptive statistics. 

Bar charts

Data can be represented visually using a suitable graphical display so that differences in mean values can easily be seen. The most suitable graph in this case is a bar chart. Bar charts are used when data is divided into categories (discrete data).

Scattergrams

A scattergram is a type of graph that represents the strength and direction of a relationship between co-variables in a correlational analysis. 

58 of 68


Normal distribution - A symmetrical spread of frequency data that forms a bell-shaped pattern. The mean, median and mode are all located at the highest peak.

Skewed distribution - A spread of frequency data that is not symmetrical, where the data clusters to one end.

Positive skew - A type of distribution in which the long tail is on the positive side of the peak and most of the distribution is concentrated on the left.

Negative skew - A type of distribution in which the long tail is on the negative side of the peak and most of the distribution is concentrated on the right. 

59 of 68

Histograms and line graphs

Histograms - In a histogram the bars touch each other, which shows that data is continuous rather than discrete. 

Line graphs - Line graphs also represent continuous data and use points connected by lines to show how something changes in value. 

60 of 68

Statistical testing

Statistical tests provide a way of determining whether hypotheses should be accepted or rejected. In psychology, they tell us whether differences or relationships between variables are statistically significant or have occurred by chance.

The concept of significance

Just because there is a difference in the mean number of words spoken in the two conditions, it is not certain that this is a significant difference. The difference found may have been no more than that which could have occurred by chance, that is, by coincidence or a fluke. To find this out, we need to use a statistical test.

61 of 68

The sign test

A statistical test used to analyse the difference in scores between related items. Data should be nominal or better.

To use the sign test:

  • Test of difference 
  • Need a repeated measures design
  • Nominal data.
62 of 68

The concept of probability

All studies employ a significance level in order to check for significant differences or relationships. The accepted level of probability in psychology is 0.05. This is the level at which the researcher decides whether to accept the hypothesis or not. 

If the experimental hypothesis is accepted, this means there is less than 5% probability that the results occurred by chance.

In some circumstances, researchers need to be even more confident that results were not due to chance and so employ a stricter, more stringent significance level (0.01).

63 of 68

The critical value

When the statistical test has been calculated, the researcher is left with a number - the calculated value. This needs to be compared with a critical value to decide whether the result is significant. The critical values for a sign test are given in a table of critical values. 

You need the following information to use the table:

  • Significance level desired 
  • No. of participants
  • Whether the hypothesis is directional or non-directional.

These pieces of information allow you to locate the critical value for your data. For the sign test, the calculated value has to be equal to or lower than the critical value for the result to be regarded as significant.

64 of 68

The sign test

  • Convert the data to nominal data.
  • From the table add up the + and -.
  • Take the less frequent sign and call this S (calculated value).
  • Compare calculated value with the critical value.

Note that if participants get the same score in both conditions, these ties are ignored and the total number of participants (N) is adjusted accordingly. 
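
The steps above can be sketched in Python. The scores and the critical value of 1 (which would normally be read from a published table for the remaining N at the chosen significance level and hypothesis direction) are illustrative assumptions only:

```python
def sign_test(before, after, critical_value):
    """Sign test for related scores (repeated measures design).
    Returns (S, significant). critical_value must be taken from a
    published table for your N, significance level and direction."""
    signs = []
    for b, a in zip(before, after):
        if a > b:
            signs.append('+')
        elif a < b:
            signs.append('-')
        # ties (a == b) are dropped, reducing N accordingly
    # S (the calculated value) is the count of the less frequent sign
    s = min(signs.count('+'), signs.count('-'))
    # significant if S is equal to or lower than the critical value
    return s, s <= critical_value

# Hypothetical before/after scores; one tie, so N drops from 10 to 9
before = [5, 7, 3, 8, 6, 4, 7, 5, 6, 5]
after  = [7, 8, 5, 9, 8, 6, 6, 7, 8, 5]
print(sign_test(before, after, critical_value=1))  # (1, True): S = 1, significant
```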

65 of 68

The role of peer review

Peer review: the assessment of scientific work by others who are specialists in the same field to ensure that any research intended for publication is of high quality.

The main aims of peer review

  • To allocate research funding. Independent peer evaluation takes place to decide whether or not to award funding for a proposed research project.
  • To validate the quality and relevance of research. 
  • To suggest amendments or improvements.
66 of 68

Evaluation of peer review

Whilst the benefits are clear - establishing validity and accuracy of research - certain features are open to criticism.


  • It is usual practice for the 'peer' to remain anonymous throughout the process, as this is likely to produce a more honest appraisal. However, a minority of reviewers may use their anonymity as a way of criticising rival researchers who they perceive as having crossed them in the past. This is made more likely by the fact that researchers are often in direct competition for funding.

Publication bias

  • There is a natural tendency for editors of journals to want to publish significant findings to increase the credibility and circulation of their publication; they also prefer to publish positive results. This could mean that research that does not meet these criteria is ignored or disregarded. 

Burying ground-breaking research

  • Peer review may suppress opposition to mainstream theories, as reviewers may wish to maintain the status quo within particular scientific fields. This slows the rate of change within a scientific discipline. 
67 of 68

Implications of research for the economy

Attachment research into the role of the father

Attachment research has come a considerable way since Bowlby. Psychological research has shown that both parents are equally capable of providing the emotional support necessary for healthy psychological development, and this understanding may promote more flexible working arrangements within the family. It is now the norm in many households for the mother to be the higher earner and so work longer hours, while many couples share childcare responsibilities across the working week. This means that modern parents are better equipped to maximise their income and contribute more effectively to the economy.

There are other examples throughout the other topics.

68 of 68







