In a factorial design, multiple independent effects are tested simultaneously. Each level of one factor is tested in combination with each level of the other(s), so the design is orthogonal.
The analysis of variance aims to investigate both the independent and combined effect of each factor on the response variable. The combined effect is investigated by assessing whether there is a significant interaction between the factors. All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance.
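To put a rough number on that risk: if each of k hypothesis tests uses an alpha of .05, the chance that at least one comes out significant by chance alone is 1 - (1 - .05)^k. The sketch below assumes a three-factor design (7 terms) and independent tests, which is an idealization:

```python
# Chance of at least one false positive across the 7 hypothesis tests
# in a three-factor ANOVA (3 main effects, 3 two-way interactions,
# 1 three-way interaction), assuming independent tests at alpha = .05.
alpha = 0.05
num_tests = 7
p_at_least_one = 1 - (1 - alpha) ** num_tests
print(round(p_at_least_one, 3))  # 0.302
```

So with seven terms, roughly a 30% chance that some test is significant by chance alone, even when nothing real is going on.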
Fortunately, experience says that high-order interactions are rare, and the ability to detect interactions is a major advantage of multiple-factor ANOVA. Returning to our 2x2 example, we find the difference scores between the two distraction effects.
This difference of differences is the interaction effect (the green column in the table). The mean distraction effect was 6 in the no-reward condition and 2 in the reward condition.
This difference is the interaction effect. The size of the interaction effect was 4 (6 - 2). How can we test whether the interaction effect was likely or unlikely due to chance? Oh look, the interaction was not significant. At least, not if we had set our alpha criterion to 0.05.
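The difference-of-differences computation can be sketched in a few lines. The four cell means below are our reconstruction, chosen to be consistent with the distraction effects of 6 (no-reward) and 2 (reward) described in the text, not a table copied from the chapter:

```python
# Reconstructed cell means consistent with the chapter's example
# (these are illustrative, not copied from the chapter's table)
means = {("no_distraction", "no_reward"): 9,
         ("distraction", "no_reward"): 3,
         ("no_distraction", "reward"): 11,
         ("distraction", "reward"): 9}

# Distraction effect within each reward level
effect_no_reward = (means[("no_distraction", "no_reward")]
                    - means[("distraction", "no_reward")])  # 6
effect_reward = (means[("no_distraction", "reward")]
                 - means[("distraction", "reward")])        # 2

# The interaction is the difference of these differences
print(effect_no_reward - effect_reward)  # 4
```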
We could write up the results like this. One reason for this practice is that the researcher is treating the means as if they are not different, because there was an above-alpha probability that the observed differences were due to chance. If they are not different, then there is no pattern to report.
There are differences in opinion among reasonable and expert statisticians on what should or should not be reported. The mean distraction effect in the no-reward condition was 6 and the mean distraction effect in the reward condition was 2.
Here is what a full write-up of the results could look like. Interim Summary. We went through this exercise to show you how to break up the data into individual comparisons of interest. We will do this in a moment to show you that they give the same results.
We broke up the analysis into three parts: the main effect for distraction, the main effect for reward, and the 2-way interaction between distraction and reward. There you have it. We do essentially the same thing that we did before in the other ANOVAs, and the only new thing is to show how to compute the interaction effect.
In the following sections we use tables to show the calculation of each SS. We use the same example as before with the exception that we are turning this into a between-subjects design.
There are now 5 different subjects in each condition, for a total of 20 subjects. As a result, we remove the subjects column. We calculate the grand mean (the mean of all of the scores). Then, we calculate the differences between each score and the grand mean. We square the difference scores, and sum them up.
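As a sketch of this procedure, here is the SS total computation in Python. The individual scores are made up (chosen only to be consistent with the chapter's condition means), so treat the specific numbers as illustrative:

```python
# Made-up scores, 5 per condition, 20 total (consistent with the
# chapter's condition means but not the chapter's actual table)
scores = [9, 8, 10, 7, 11,    # A: no-distraction / no-reward
          3, 2, 4, 1, 5,      # B: distraction / no-reward
          11, 10, 12, 9, 13,  # C: no-distraction / reward
          9, 8, 10, 7, 11]    # D: distraction / reward

# Grand mean, then squared deviations of every score from it
grand_mean = sum(scores) / len(scores)
ss_total = sum((x - grand_mean) ** 2 for x in scores)
print(grand_mean, ss_total)  # 8.0 220.0
```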
We need to compute the SS for the main effect for distraction. We calculate the grand mean (the mean of all of the scores). Then, we calculate the means for the two distraction conditions.
We find the differences between each distraction condition mean and the grand mean. Then we square the differences and sum them up. These tables are a lot to look at! Notice here that we first found the grand mean, 8. Then we found the mean for all the scores in the no-distraction condition (columns A and C); that was 10. All of the difference scores for the no-distraction condition are 10 - 8 = 2. We also found the mean for the scores in the distraction condition (columns B and D); that was 6.
So, all of the difference scores for the distraction condition are 6 - 8 = -2. We need to compute the SS for the main effect for reward. Then, we calculate the means for the two reward conditions. We find the differences between each reward condition mean and the grand mean. Now we treat each no-reward score as the mean for the no-reward condition (6).
Then, we treat each reward score as the mean for the reward condition (10). We need to compute the SS for the interaction effect between distraction and reward. How do we calculate the variation explained by the interaction? The heart of the question is something like this: do the individual means for each of the four conditions do something a little bit different than the group means for both of the independent variables?
For example, consider the overall mean for all of the scores in the no-reward group; we found that to be 6. In the no-distraction group, was the mean for column A (the no-reward condition in that group) also 6? The answer is no, it was 9. How about the distraction group? Was the mean for the no-reward condition in the distraction group (column B) also 6? No, it was 3.
The means of 9 and 3 are not the same as the overall no-reward mean of 6. If there was no hint of an interaction, we would expect that the means for the no-reward condition in both levels of the distraction factor would be the same; they would both be 6. However, when there is an interaction, the means for the no-reward condition will depend on the level of the distraction IV. In this case, it looks like there is an interaction because the means are different from 6.
This is extra variance that is not explained by the group means for reward and distraction. We want to capture this extra variance and sum it up. Then we will have a measure of the portion of the variance that is due to the interaction between the reward and distraction conditions. What we will do is this: we will find the four condition means. Then we will see how much additional variation they explain beyond the group means for reward and distraction.
To do this we treat each score as the condition mean for that score. Then we subtract the mean for the distraction group, and the mean for the reward group, and then we add the grand mean.
This gives us the unique variation that is due to the interaction. We could also say that we are subtracting each condition mean from the grand mean, and then adding back in the distraction mean and the reward mean, that would amount to the same thing, and perhaps make more sense.
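Here is that adjustment sketched in Python. The cell scores are made up to match the chapter's cell means (9, 3, 11, and 9), and each deviation follows the formula just described: condition mean minus distraction mean minus reward mean plus grand mean.

```python
# Made-up cell scores (5 per cell) matching the chapter's cell means;
# the chapter's actual table may contain different individual scores.
cells = {("no_distraction", "no_reward"): [9, 8, 10, 7, 11],
         ("distraction", "no_reward"):    [3, 2, 4, 1, 5],
         ("no_distraction", "reward"):    [11, 10, 12, 9, 13],
         ("distraction", "reward"):       [9, 8, 10, 7, 11]}

all_scores = [x for cell in cells.values() for x in cell]
grand_mean = sum(all_scores) / len(all_scores)  # 8.0

def level_mean(level, position):
    """Mean of all scores at one level of one factor."""
    vals = [x for key, cell in cells.items() if key[position] == level
            for x in cell]
    return sum(vals) / len(vals)

# Each score is treated as its condition mean; the interaction
# deviation is: condition mean - distraction mean - reward mean + grand
ss_interaction = 0.0
for (dist, rew), cell in cells.items():
    cell_mean = sum(cell) / len(cell)
    dev = cell_mean - level_mean(dist, 0) - level_mean(rew, 1) + grand_mean
    ss_interaction += len(cell) * dev ** 2

print(ss_interaction)  # 20.0
```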
When you look at the following table, you can see we apply this formula to the calculation of each of the difference scores. The last thing we need to find is the SS Error. We can solve for that because we have found everything else in this formula: SS Total = SS Distraction + SS Reward + SS Interaction + SS Error. Even though this textbook is meant to explain things in a step-by-step way, we guess you are tired of watching us work out the 2x2 ANOVA by hand. You and me both; making these tables was a lot of work.
We have already shown you how to compute the SS for error before, so we will not do the full example here. Instead, we solve for SS Error using the numbers we have already obtained.
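The whole decomposition, including solving for SS Error by subtraction, can be sketched like this (again with made-up scores chosen only to match the chapter's condition means, so the SS values are illustrative):

```python
# Made-up scores (5 per cell) consistent with the chapter's condition
# means; the real table's individual scores may differ.
a = [9, 8, 10, 7, 11]    # no-distraction / no-reward
b = [3, 2, 4, 1, 5]      # distraction / no-reward
c = [11, 10, 12, 9, 13]  # no-distraction / reward
d = [9, 8, 10, 7, 11]    # distraction / reward

scores = a + b + c + d
grand = sum(scores) / len(scores)
ss_total = sum((x - grand) ** 2 for x in scores)

def ss_effect(level1, level2):
    """SS for a two-level main effect in a balanced design."""
    m1, m2 = sum(level1) / len(level1), sum(level2) / len(level2)
    return len(level1) * (m1 - grand) ** 2 + len(level2) * (m2 - grand) ** 2

ss_distraction = ss_effect(a + c, b + d)
ss_reward = ss_effect(a + b, c + d)

# Interaction: cell mean - distraction mean - reward mean + grand mean
ss_interaction = 0.0
for cell, dist, rew in [(a, a + c, a + b), (b, b + d, a + b),
                        (c, a + c, c + d), (d, b + d, c + d)]:
    dev = (sum(cell) / len(cell) - sum(dist) / len(dist)
           - sum(rew) / len(rew) + grand)
    ss_interaction += len(cell) * dev ** 2

# Solve for SS Error by subtraction, as in the text
ss_error = ss_total - ss_distraction - ss_reward - ss_interaction
print(ss_total, ss_distraction, ss_reward, ss_interaction, ss_error)
# 220.0 80.0 80.0 20.0 40.0
```

Summing the within-cell squared deviations directly gives the same SS Error, which is a useful check on the subtraction.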
A quick look through the Sum Sq column shows that we did our work by hand correctly. Congratulations to us! We conducted a between-subjects design, so we did not get to further partition the SS error into a part due to subject variation and a left-over part.
We also gained degrees of freedom in the error term. It turns out that with this specific set of data, we find p-values of less than 0.05. It has been a long couple of weeks and months since we started this course on statistics, and we just went through the most complicated things we have done so far. This is a long chapter.
What should we do next? We could do more of these by hand. Do you want to do that? It builds character. But if we keep doing these by hand, it is not good for us, and it is not you doing them by hand either. So, what are the other options? The other options are to work at a slightly higher level. This is what you do in the lab, and what most researchers do.
They use software most of the time to make the computer do the work. Because of this, it is most important that you know what the software is doing. All of these skills are built up over time through the process of analyzing different data sets. No more monster tables of SSes. You are welcome. This will be the very same data that you will analyze in the lab for factorial designs. Do you pay more attention when you are sitting or standing? This was the kind of research question the researchers were asking in the study we will look at.
In fact, the general question and design is very similar to our fake study idea that we used to explain factorial designs in this chapter. This paper asked whether sitting versus standing would influence a measure of selective attention, the ability to ignore distracting information. They used a classic test of selective attention, called the Stroop effect. You may already know what the Stroop effect is.
In a typical Stroop experiment, subjects name the color of words as fast as they can. The trick is that sometimes the color of the word is the same as the name of the word, and sometimes it is not. Here are some examples: imagine the words red, green, blue, and yellow, each printed either in its own color or in a mismatching color.
The task is to name the color, not the word. Congruent trials occur when the color and word match. So, the correct answers for each of the congruent stimuli shown would be to say, red, green, blue and yellow. Incongruent trials occur when the color and word mismatch. The correct answers for each of the incongruent stimuli would be: blue, yellow, red, green.
The Stroop effect is an example of a well-known phenomenon. What happens is that people are faster to name the color of the congruent items compared to the color of the incongruent items. This difference (incongruent reaction time - congruent reaction time) is called the Stroop effect.
Many researchers argue that the Stroop effect measures something about selective attention, the ability to ignore distracting information. In this case, the target information that you need to pay attention to is the color, not the word. For each item, the word is potentially distracting, it is not information that you are supposed to respond to. People who are good at ignoring the distracting words should have small Stroop effects.
People who are bad at ignoring the distracting words should have big Stroop effects. They will not ignore the words, causing them to be relatively fast when the word helps, and relatively slow when the word mismatches.
As a result, they will show a difference in performance between the incongruent and congruent conditions. If we take the size of the Stroop effect as a measure of selective attention, we can then start wondering what sorts of things improve selective attention. The research question of this study was to ask whether standing up improves selective attention compared to sitting down.
They predicted smaller Stroop effects when people were standing up and doing the task, compared to when they were sitting down and doing the task. The design of the study was a 2x2 repeated-measures design. The first IV was congruency (congruent vs. incongruent). The second IV was posture (sitting vs. standing). The DV was reaction time to name the color of the word. They had subjects perform many individual trials responding to single Stroop stimuli, both congruent and incongruent.
And they had subjects do the task sometimes while standing up, and sometimes while sitting down. Here is a graph of what they found. The figure shows the means. We can see that Stroop effects were observed in both the sitting position and the standing position.
In the sitting position, mean congruent RTs were shorter than mean incongruent RTs (the red bar is lower than the aqua bar). The same general pattern is observed for the standing position. However, it does look as if the Stroop effect is slightly smaller in the stand condition: the difference between the red and aqua bars is slightly smaller compared to the difference when people were sitting. Remember, the interaction effect tells us whether the congruency effect changes across the levels of the posture manipulation.
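To make the interaction concrete, here is the arithmetic with made-up mean RTs (the study's actual values differ); the interaction is the difference between the two Stroop effects:

```python
# Made-up mean RTs in milliseconds, purely illustrative of the
# study's design (not the published numbers)
rt = {("sit", "congruent"): 800, ("sit", "incongruent"): 900,
      ("stand", "congruent"): 790, ("stand", "incongruent"): 870}

# Stroop effect = incongruent RT - congruent RT, within each posture
stroop_sit = rt[("sit", "incongruent")] - rt[("sit", "congruent")]        # 100
stroop_stand = rt[("stand", "incongruent")] - rt[("stand", "congruent")]  # 80

# The interaction asks whether the Stroop effect changes with posture
print(stroop_sit, stroop_stand, stroop_sit - stroop_stand)  # 100 80 20
```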
We can be very confident that the overall mean difference between congruent and incongruent RTs was not caused by sampling error. What were the overall mean differences between mean RTs in the congruent and incongruent conditions? We would have to look at those means to find out. The table shows the mean RTs, standard deviation (SD), and standard error of the mean for each condition. These means show that there was a Stroop effect.
Mean incongruent RTs were slower (a larger number in milliseconds) than mean congruent RTs. The main effect of congruency is important for establishing that the researchers were able to measure the Stroop effect. However, the main effect of congruency does not say whether the size of the Stroop effect changed between the levels of the posture variable. So, this main effect was not particularly important for answering the specific question posed by the study. Remember, the posture main effect collapses over the means in the congruency condition.
We are not measuring a Stroop effect here. We are measuring a general effect of sitting vs standing on overall reaction time. The table shows that people were a little faster overall when they were standing, compared to when they were sitting. Again, the main effect of posture was not the primary effect of interest. They wanted to know if their selective attention would improve when they stand vs when they sit. They were most interested in whether the size of the Stroop effect difference between incongruent and congruent performance would be smaller when people stand, compared to when they sit.
To answer this question, we need to look at the interaction effect. With this information, and by looking at the figure, we can get a pretty good idea of what this means. This is a pretty small effect in terms of the amount of time reduced, but even though it is small, a difference even this big was not very likely to be due to chance. Based on this research there appears to be some support for the following logic chain.
Fine, what could that mean? Well, if the Stroop effect is an index of selective attention, then it could mean that standing up is one way to improve your ability to selectively focus and ignore distracting information.
The actual size of the benefit is fairly small, so the real-world implications are not that clear. Nevertheless, maybe the next time you lose your keys, you should stand up and look for them, rather than sitting down and not looking for them.
We have introduced you to factorial designs, which are simply designs with more than one IV.