The Take Away from Psych 4850

I want to start off by thanking you all for reading my blog posts and for taking the time to comment! It was greatly appreciated.

I have to say that coming into this course I was skeptical about whether I would learn anything or have the potential to get a good grade. When I found out that there were no tests, no papers and no final, I thought it was a joke. I had it in my head that things would be incredibly easy and that I could sail through without doing much.

Oh how wrong I was.

First of all, keeping on top of not only blogs but comments as well every single week was no simple task. I literally spent more time preparing my blogs and doing things for this class than any of my other courses. Weekly assignments that make up your whole grade are not easy to complete while attempting to balance everything else.

This class stressed me out.

I found myself moving from wanting to structure my blog in such a way that I would receive a good grade to actually caring about the content and what I was gaining from it. I have learnt so much about my own personal views and values that I did not necessarily know before. My opinions have become more solidified and I found that, heaven forbid, I care about methods.

Before this course I thought methods were simply the “scary math parts” of experiments. Turns out there is so much more to methods than statistics and numbers. The social aspect of things particularly caught my interest and I was pleased to find that there are areas in research that are important to me. I enjoy ethics and I’m not afraid to admit it. As a philosophy major I had only thought about ethics in one way before exploring it in the context of research. I learnt a lot from my topic and came to conclusions that I would never have thought of before.

To be honest (tbh ahaha, sorry), I learnt a lot from all of you as well. Without this course I would have continued to think social media was for one thing only without the potential to be used for something more. In addition, I would still believe every article I read on Facebook! Say it ain’t so…

I also gained an entirely new perspective on personality disorders and have re-discovered my love for forensic psychology (though I am now disappointed in it). I now see the importance of representing different cultures in our research and how the research processes may affect individuals, especially vulnerable ones. I look at ways of testing personalities with a more critical eye whereas before I thought everything was valid and reliable.

I think one of the most important things I learnt was that it is okay to not understand everything. I still lack confidence in reading data and deciphering what it all means. I also still hate statistics. I am not the only person who doesn’t understand these things and I doubt I will ever fully grasp them. At first, I felt incredibly inferior and unintelligent in all areas of research. But once I discovered the different areas of methods I figured out that everyone has their own thing they are passionate about. I do not have to get everything. I can enjoy the things I find interesting but also attempt to learn something about things that confuse me without understanding every little thing about them.

This class showed me that I can learn quite well without writing a test or a paper. That kind of pressure is not necessary for success and it actually hinders my learning. I was able to learn about areas I like but engage with everyone else at the same time. I laughed more in this class than I have in most and I looked forward to coming every week. How insane is that?

Having each of you in this class made it that much more enjoyable. I made new friends and figured out that you are all hilarious, a quality I admire. I can honestly say I will miss having you all in class with me next semester and I doubt I will have as much fun. I mean, we played YouTube videos a lot and discussed everything from bitter monkeys to elephants on acid; you just can’t beat that.

Who would have guessed that methods can be fun?


UNCOVERING INFORMED CONSENT AND ETHICAL ISSUES IN PSYCHOLOGY RESEARCH: SYNTHESIS

To synthesize my topic I will take what was discussed in my previous four blogs and combine the information to decipher what I truly think about all of this.

For starters, I believe that when it is possible, competent consent should always be gathered before conducting an experiment. As stated before, this can only be done to a certain degree, but I still believe it is necessary nonetheless (https://bbodinuleth.wordpress.com/2015/10/24/uncovering-informed-consent-and-ethical-issues-in-psychology-research-part-one/).

High risk experiments are a bit trickier because true informed consent cannot be gathered and results cannot always be predicted. The reasons for this and the difficulties that arise can be found in further detail in Part Three of this blog series: https://bbodinuleth.wordpress.com/2015/11/06/uncovering-informed-consent-and-ethical-issues-in-psychology-research-part-three/ .

The part I have been struggling with is: how far is too far? Some draw the line at deception; others draw it where the risks outweigh the benefits. As far as I see it, the question being asked when conducting an experiment is just as important as all the formalities involved. If the scientific question is relevant and significant enough, then I think the benefits almost always outweigh the risks. I believe that sacrifices need to be made in order to further our knowledge base and advance scientifically. Now, I am not saying we can do whatever we want to people to find answers. Procedures, policies and safeguards need to be put in place to protect participants in these experiments. While the answers we find are important, participant safety is also a huge priority.

In Part Four of this series I discussed the possibility that perhaps the debriefing process is the key to permitting high risk experiments while covering our bases when it comes to participant security (https://bbodinuleth.wordpress.com/2015/11/13/uncovering-informed-consent-and-ethical-issues-in-psychology-research-part-four/). I concluded that sometimes the process of debriefing can cause more harm than good. While this is true, I may have been a bit hasty to tear the debriefing process apart.

I have consistently used Milgram’s experiments as an example to support my thoughts about debriefing. These experiments show an instance where being told the truth about an experiment caused long-lasting psychological harm. But the fact of the matter is, without the debriefing process the participants would have been a lot worse off.

I briefly mentioned the problems with damaging the integrity of self last week. It seems to me that experiments where there is a significant risk to one’s integrity of self cause the most harm. This is because people do not cope well when their integrity of self is compromised, and the effects can vary from minimal to detrimental. The debriefing process in Milgram’s experiments showed participants what they were capable of and caused them to see themselves in a different light. However, had they not been debriefed, their integrity of self would have been shattered in an even more devastating way.

What I take from this is that the debriefing process continues to be an important part of conducting experiments. While it may result in lasting negative effects, the risk of not debriefing is too high. Regardless of the potential harms participants may be exposed to by being debriefed, they are still less at risk than if they never found out the truth.

I think that high risk experiments with an important question should be rendered permissible. The proper precautions should be put in place to minimize risk as much as possible. Participants should be debriefed afterwards, and I think it would be best to have resources available for participants to use if they have any further complications from the experiment. How participants may feel after an experiment is not always predictable. Information found out in the debriefing process, or participation in the experiment itself, could cause long-term effects. It should be disclosed at the end of debriefing that if anxieties are still present, the experimenters can provide resources designed to ease those anxieties, such as counseling.

When analyzing what the potential risks could be, I think researchers should look at the experiment on a spectrum. The expected/predicted results should be considered along with the extreme results on both ends of the spectrum. With these considerations in place, safeguards can be implemented to address all possible, foreseeable risks. Doing this will take more time and money, but I think it is necessary to protect participants and minimize the potentially long lasting harms they could face. At the same time, I think it makes high risk experiments stronger and reduces apprehension about the experiment’s permissibility.

I have made high risk experiments more complicated than I probably should have. However, what I have realized is that no experiment is perfect and sometimes risks become a reality in search of answers to significant questions. The current standard for experiments and the rules in place for safeguards do a decent job of protecting participants without hindering the experiment. With all the complications I brought to the surface, I think it is safe to say that the current standard is doing the best job it can. Nothing in research can be perfect and I think that as long as we learn from our past and always keep our priorities in check we will do the best job we can.

Do you agree or disagree that sometimes risks have to be taken in order to answer significant questions? Why or why not? What have you learnt from this blog series? Has it changed your point of view or made you realize something you hadn’t before?

References:

Blog series


Uncovering Informed Consent and Ethical Issues in Psychology Research: Part Four

For the final blog before my synthesis I would like to look deeper into high risk experiments and decipher where the difficulties truly lie.

Debriefing is considered very important in research and is done at the end of the experiment. In experiments where informed consent cannot be obtained and/or where deception is present, debriefing is of utmost importance. Debriefing allows “removal [of] any misconceptions and anxieties that the participants have about the research and [is designed] to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (McLeod, 2015). The debriefing process should occur as soon as possible, with the purpose of the research explained as fully as possible while allowing the participant to ask any questions (McLeod, 2015). If deception was part of the experiment it must be disclosed in the debriefing and the reasons for the deception revealed (McLeod, 2015).

With these rules in mind, perhaps debriefing is the answer to our ethical dilemma. With this process in place, participants are informed of the experiment’s intentions even if informed consent was not gathered at the beginning. The fullest disclosure is possible once the experiment is complete since nothing can be compromised once the participant is finished.

This might seem very simple but it is not. Disclosing crucial information, especially deception, at the end of an experiment does not allow participants to go back and decide not to participate if they so choose. Retrospective consent allows a participant the right to withdraw once the nature of the experiment has been revealed (Unknown Author, 2013). Participants always have the right to withdraw their data at any time during the experiment. But doing so does not undo the fact that they participated in the first place.

The reason why Milgram’s experiments and the Stanford Prison experiment are now deemed impermissible is based on what we found out after the fact. As I mentioned before, the fact that people listen to what authority figures tell them is no surprise. What was unpredictable about the experiments was how the participants would be affected post-debriefing. Participants in Milgram’s experiments were debriefed once the experiments were complete. They were informed that they did not actually hurt anyone and the nature of the deception was revealed. To be frank, disclosing the truth was actually worse for the participants than being left in the dark. True knowledge of the experiment meant that the participants learnt what they were capable of, and they did not like what they saw.

Debriefing on the surface may seem like the best option when trying to decide which high risk experiments are permissible and which are not. However, just because you inform participants after the fact does not mean the risks are evaded. Those who participated in Milgram’s experiments were worse off after being debriefed because their integrity of self was compromised. I will discuss the importance of integrity of self in my synthesis.

For now, I conclude that debriefing is not actually the answer to permissibility in high risk experiments. The information disclosed to participants after the fact may do them more harm than good. If a participant, in retrospect, would not give consent at the beginning of the experiment after learning its true nature then there is a problem. The issue is that you cannot take back what occurs to participants in an experiment once it is over. If they would not consent to doing it after the debriefing then the experiment should be impermissible. The issue with this is that we cannot always predict the outcome of an experiment or what the effects will be on participants whether it is because of the debriefing or not (https://bbodinuleth.wordpress.com/2015/11/06/uncovering-informed-consent-and-ethical-issues-in-psychology-research-part-three/).

Now that I have pointed out the issues in debriefing, the part of experimentation that is supposed to solve problems of consent and deception, ethics regarding high risk experiments have become even more complicated than they already were. If debriefing cannot solve the problems how can we allow experiments that are unable to predict the outcome and effects on participants?

By no means do I think debriefing should be eliminated from experimentation; it is a valuable stage in the research process. I only mean to draw attention to the fact that when it comes to high risk experiments debriefing does not necessarily solve the underlying issues. The problems occur when debriefing itself causes negative effects on participants, like in Milgram’s experiments, or when debriefing does not remove the anxieties taken on by participants during the experiment.

My final stance on high risk experiments and when they should be permissible, based on the blogs before it, will have to wait until the synthesis.

For now, do you think debriefing solves the problems with incompetency, the inability to gather informed consent and high risk deceptive experiments? Why or why not? Do you think the consequences of learning the true nature of some experiments are too risky? What should we do when debriefing does not achieve its purpose in removing misconceptions and anxieties?

References:

McLeod, S. A. (2015). Psychology Research Ethics. Retrieved from http://www.simplypsychology.org/Ethics.html

Unknown Author. (2013). Dealing with Ethical Issues – DIP (Deception, Informed Consent, Protection of Participants). Retrieved from http://psychteacheraqa.blogspot.ca/2013/05/dealing-with-ethical-issues-dip.html

Uncovering Informed Consent and Ethical Issues in Psychology Research: Part Three

So far I have established that a competency assessment should be present in research when obtaining informed consent. The level of competency should be equal to the perceived level of risk involved in participating in the experiment.

However, there are instances where individuals cannot provide informed consent either because they are incompetent or because true consent cannot possibly be given.

If competency is needed to provide informed consent, what do we do in experiments where the participants are incompetent? When researching the mentally ill, the targeted participants do not have the ability to consent due to a lack of competency to do so. Some may argue that this renders studies of this nature impermissible since informed consent is so important in research.

I disagree.

Comparing this back to biomedical ethics, there are many instances where treatment must be administered immediately, regardless of consent. A paper I read by Hanfried Helmchen discusses patients with dementia, psychotic states, suicidal tendencies, acute strokes, etc. (Helmchen, 2011). In these cases, treatment must be administered immediately even though the patient’s level of competency is reduced. Given these instances, the paper goes on to conclude a need for “substituted informed consent, informed assent if possible, a relationship of benefits to risks clearly in favor of benefits, etc.” (Helmchen, 2011).

Helmchen recognizes the “demand for improving the ill condition of these populations, and the [growing conviction] that these populations have the right to participate in research that may yield helpful research results for successful developments against the mental disorder that causes their incompetence” (Helmchen, 2011).

Since these populations cannot be left out of research, a relationship between risks and benefits is important. If the risks to the individual are deemed higher than the benefits, many would support disallowing the experiment. But what if the results would benefit the targeted group as a whole despite the risks to those involved? I argue that if an experiment on mentally ill individuals provides information that will lead to successful developments regarding their condition, it should be permissible. Overall, there is a demand for more information about mental conditions, and research has the potential to help us learn more and find ways to help.

I recognize that caution must be taken in regards to the type of risk involved and how long the effects will last. If the risks to the individual can be mitigated as much as possible, and the resources to aid them afterwards are present, then experiments involving incompetent participants are permissible.

The types of risks that are permissible, and those that are not, are consistent across the board. When it comes to high risk experiments involving participants that, even though competent, cannot provide consent the same risk-benefit ratio applies.

In high risk experiments, such as the extremes of Milgram’s experiments or the Stanford Prison experiment mentioned in class, it is not possible to obtain full consent. Participants may be competent but they cannot possibly give consent because (A) they cannot be informed of the level of risk so as not to give away the experiment, and/or (B) the potential risks or effects cannot be determined or predicted.

The potential effects high risk experiments may have cannot always be known or even anticipated. How can we allow or disallow certain high risk experiments if the risks cannot be determined? Some may argue that experiments should be considered impermissible if the risks cannot be verified and mitigated. However, experiments have been conducted where the effects could not be predicted but were without detrimental results, as has been brought up in class on multiple occasions.

This leads me to believe that we need a tighter grasp on which forms of experiments are permissible and which are not. Some high risk experiments may be rendered permissible and others impermissible depending on the nature of the experiment, on a case-by-case basis. Next week’s blog will address which forms of experimentation take risk-benefit ratios too far and which do not. I argue for a very specific, fine line in regards to the permissibility of high risk experimentation when competent, informed consent is not possible.

I’ll leave you hanging on what that line is until then.

For now, do you agree or disagree with experimentation on incompetent populations such as the mentally ill? What are your reservations, if any, on using them in experimentation? What types of experimentation, in your opinion, are permissible or impermissible in cases with incompetent individuals?

Reference:

Helmchen, H. (2011). Ethics of Clinical Research with Mentally Ill Persons. Springer-Verlag, 1-12. Retrieved from http://link.springer.com/article/10.1007/s00406-011-0287-2

Uncovering Informed Consent and Ethical Issues in Psychology Research: Part Two

Last week I discussed the potential for incorporating a competency system in psychology research similar to that in biomedical ethics. I argued that full consent is not acquirable without a level of competency equal to the level of risk involved in the experiment.

A number of issues were brought up in comments such as strong differences between the medical system and psychology research and whether uninformed consent is really a detriment. I will address these issues in this week’s blog. I will also continue the debate on whether true informed consent is necessary in research and whether it is attainable.

The medical system and processes in psychology research are indeed dissimilar. They involve varying levels of risk and differ in their purpose. Yes, treating cancer is very different from conducting a psychology experiment. However, the process the medical system uses to assess competency in patients provides a good example to clarify my argument and that is what it is intended for.

To recap, there are differing levels of risk to participants depending on the experiment being conducted. The same level of ‘informed consent,’ in my opinion, is not sufficient to cover all levels of risk. The medical system has three levels of competency to match the different levels of risk (https://bbodinuleth.wordpress.com/2015/10/24/uncovering-informed-consent-and-ethical-issues-in-psychology-research-part-one/).

To me, it makes sense to have a procedure in place to ensure a competency level equal to the nature of the experiment in order to protect participants and confirm they understand the potential risks as much as restrictions allow. Obviously, participants cannot be fully informed or the experiment would not be successful. However, when it comes to more dangerous or controversial experiments, some form of assessment should be done to test participants’ competency levels.

I have established that informed consent cannot be obtained without a competency assessment matching the level of risk. But what about participants who are incompetent? In the medical system this includes patients with mental health problems, children, the elderly, etc. (Harrison et al., 2012; Brown, 2012). When it comes to medical procedures, the concern of full understanding is apparent in these examples. However, “children who have reached the age of reason (6 years or older), as well as the senile, the mildly retarded, and the intoxicated are considered competent” (Drane, 2012). Regardless of impairment in understanding, they are considered competent enough under the first standard but not the second and definitely not the third. In these cases, consent from legal guardians and third parties is legally required (Harrison et al., 2012; Brown, 2012).

Applying these examples to research there are obviously times where participants are competent, but only to a small degree, or incompetent. If the experiment itself involves a risk factor above the level of competency should it be allowed? Is it detrimental if participants are not fully competent?

I still propose a new competency assessment in research that addresses participants’ understanding, to the extent the experiment allows, matching the level of potential risk. But there are problems if informed consent cannot be obtained, as my topic points out. What should be done if informed consent cannot be gathered?

As brought up in a comment from last week, lack of informed consent may not necessarily be detrimental. There are times where informed consent cannot be acquired due to the nature of the experiment and the targeted participants. Because of this, I think there should be two standards involved in psychology research: a competency assessment process and a second protocol to cover those instances where competency cannot provide informed consent.

Some high-risk experiments, such as those mentioned in class, cannot possibly attain informed consent. But that does not mean that they should then be rendered impermissible. As I have argued before, experiments that involve a question beneficial to science should be considered permissible even with high risk to the participant to a certain degree (https://bbodinuleth.wordpress.com/2015/10/10/brilliance-of-deception/).

The question then becomes how far is too far? What should be done in cases where deception is the main component of the experiment? Experiments that are likely to have long lasting effects on participants are very controversial; participants are unable to provide informed consent to the risks they face in participating.

My next blog will tackle the issues involved in high-risk experiments, particularly those involving deception. I will also outline what I believe is taking things too far and where the lines should be drawn in experiments where informed consent cannot be obtained due to the inability to have a competency level that matches the risks.

For now, what do you think should be done to test competency in participants without giving away the nature of the experiment? What should the process look like? Do you think incompetency is an issue? What do you think should be done in cases similar to the third, highest standard of risk and assessment mentioned in the previous blog? Should experiments that are unable to gather informed consent be done? Why or why not?

References:

Brown, B. F. (2012). Proxy Consent for Research on the Incompetent Elderly. In E. Gedge et al., Readings in Health Care Ethics. Broadview Press.

Drane, J. F. (2012). Competency to Give Informed Consent: A Model for Making Clinical Assessments. In E. Gedge et al., Readings in Health Care Ethics. Broadview Press.

Harrison, C. et al. (2012). Bioethics for Clinicians: Involving Children in Medical Decisions. In E. Gedge et al., Readings in Health Care Ethics. Broadview Press.

Uncovering Informed Consent and Ethical Issues in Psychology Research: Part One

I have decided to discuss certain issues in ethical standards. Part one of four will address the current standards of consent, some definitions you should be aware of and the relevant problems that exist.

Biomedical ethics are very similar to research ethics and I am going to use medical standards to bring a fresh perspective to psychology research ethics.

In order for research to be conducted informed consent must be acquired from participants. Participants must be informed of why the research is being done, their right to withdraw from the study or decline, potential risks/factors that may occur, research benefits, restrictions on confidentiality, any incentives, and where/who to contact with information regarding the results (APA, 2010).

Informed consent ensures that participants are fully ‘informed’ of the experiment and fully understand what they are consenting to. However, I believe that in order for a participant to provide informed consent they must first be competent in understanding what they are choosing.

In biomedical ethics “competency assessments focus on the patient’s mental capacities, specifically, the mental capacities to make an informed medical decision” (Drane, 2012). These assessments are done to eliminate two errors: (1) the prevention of a competent patient’s involvement in treatment decisions and (2) neglecting to protect an incompetent patient from the negative effects of a bad decision (Drane, 2012). Committing either of these errors is not in the best interest of the patient and violates respect.

There are three standards of competency in medicine: Standard (1) “is the least stringent [and involves] competency to give valid consent to medical conditions that are not dangerous and objectively are in the patient’s best interest (2) [involves] chronic illnesses or treatment that is more dangerous or of less definitive benefit [thus tipping] the risk-benefit balance (3) [is the] most stringent and demanding [involving] very dangerous decisions [where] diagnostic uncertainty is minimal, treatment is effective and death is likely to result from refusal of treatment” (Drane, 2012).

Standard One looks at a patient’s “awareness [and determines that] assent to the rational expectation alone can satisfy the decisional component” (Drane, 2012). Standard Two looks at a patient’s ability to “understand the risks and outcomes of all options and then choose” while Standard Three looks at a patient’s ability to “appreciate the nature and consequences being made critically and rationally” (Drane, 2012).

These measures are taken so that informed, competent decisions are made in regards to a patient’s best interest. Without them, detrimental consequences may result.

When we apply this to research we see a crucial difference: there is no competency assessment when acquiring ‘informed’ consent. Some may argue that the consequences of an incompetent decision to participate in a study are not as serious as incompetent decisions in the medical field. I argue that they are equally important. As we have seen throughout class, participants can undergo some pretty extreme and hindering consequences due to lasting effects of study participation.

Should we incorporate a procedure similar to the three standards of competency in biomedical ethics? Just like with medical decisions, there are differing levels of potential risk in experiments and, therefore, different levels of competency required. Without a competency assessment in research ethics, I feel that any ‘informed’ consent acquired is not truly informed. Just because you explain the intent of the experiment and the potential risks does not mean that the participant fully understands the information. Understanding the information and what it all means is what is required to be fully informed of the process. This requires competency.

I think a process similar to biomedical ethics should be incorporated into research ethics. Both involve differing levels of risk depending on the situation, and each level of risk requires a higher level of competency/understanding to provide true informed consent. I trust the biomedical system when it comes to ethics, and it is clear that without competency there is no informed consent. I think research ethics should also ensure understanding of the information, as opposed to participants simply trusting what they are told, which is what it has relied on previously.

There are issues that arise with this, such as what to do when informed consent cannot be acquired, especially in high risk experiments that require higher levels of understanding. In addition, the thought comes to mind as to whether full understanding is even possible in high risk experiments without undermining the study, let alone whether it is possible at all. These issues and more will be addressed in next week’s blog – stay tuned!

For now: what are your thoughts on the differences between informed consent in research ethics and biomedical ethics? Is it fair to compare them on the same level? Do you agree or disagree, based on the information above, that informed consent in research ethics is never truly obtained given the definition provided in biomedical ethics?

References:

APA. (2010). Ethical Principles of Psychologists and Code of Conduct. American Psychological Association. Retrieved from http://www.apa.org/ethics/code/index.aspx?item=11#802

Drane, J. F. (2012). Competency to Give Informed Consent: A Model for Making Clinical Assessments. In E. G. al, Readings in Health Care Ethics (pp. 89-92). Toronto: Broadview Press.

Necessary and Sufficient Conditions: Why it’s Important to know the Difference

For this week, I chose to write on conditions that have been introduced to me many times as a philosophy major. While these are not ‘official’ methods in psychology, I feel as if they are embedded in the research.

To start off, I will define necessary and sufficient conditions and the crucial difference you should not forget.

Necessary condition:  “A condition ‘A’ is said to be necessary for a condition ‘B’, if (and only if) the falsity of ‘A’ guarantees (or brings about) the falsity of ‘B’” (Swartz 1997).

“If it is true that if P then Q, then Q is a necessary condition for P” (Peacock, 2013)

Sufficient condition: “A condition ‘A’ is said to be sufficient for a condition ‘B’, if (and only if) the truth of ‘A’ guarantees (or brings about) the truth of ‘B’” (Swartz 1997).

“If it is true that if P then Q, then P is a sufficient condition for Q” (Peacock, 2013)

Some examples will help explain these conditions further:

Example 1: “If a human being is alive, then that human being has air” (Swartz 1997).

In an if-then statement, the consequent (Q) is the necessary condition for the antecedent (P). This means that having air is a necessary condition for a human being to be alive. However, just because something is necessary does not mean that it is sufficient. There are many things considered necessary for a human being to be alive, and having air alone does not guarantee it (Swartz, 1997).

Example 2: “Being a human is a sufficient condition for being a mammal” (Peacock, 2013)

If you are human then you must be a mammal; you cannot both be human and not be a mammal. However, many people get this confused and think that being a mammal is a sufficient condition for being human. Being a mammal does not mean you are human and is, thus, not a sufficient condition (it is, in fact, a necessary one).
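The two definitions above can be sketched as a quick check over observed truth values. This is my own illustration, not from Swartz or Peacock: the helper names `sufficient` and `necessary` and the example data are hypothetical, but the logic follows the quoted definitions (A sufficient for B: whenever A is true, B is true; A necessary for B: whenever B is true, A is true).

```python
def sufficient(pairs):
    """A is sufficient for B: every observation where A is true also has B true."""
    return all(b for a, b in pairs if a)

def necessary(pairs):
    """A is necessary for B: every observation where B is true also has A true
    (equivalently, the falsity of A guarantees the falsity of B)."""
    return all(a for a, b in pairs if b)

# Example 2 as (is_human, is_mammal) observations: humans are always
# mammals, but some mammals (e.g. dogs) are not human.
observations = [
    (True, True),    # a human (and therefore a mammal)
    (False, True),   # a dog: a mammal, but not human
    (False, False),  # a fish: neither
]

print(sufficient(observations))  # True: being human guarantees being a mammal
print(necessary(observations))   # False: being human is not required to be a mammal
# Swapping the pair order asks about the reverse direction:
print(necessary([(b, a) for a, b in observations]))  # True: being a mammal IS necessary for being human
```

The dog row is exactly the counterexample from the text: it makes "human" unnecessary for "mammal" without disturbing the sufficiency claim.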

Now, while these may seem like simple concepts, they are actually quite important (at least in my opinion). It is very easy to confuse the two conditions and, when it comes to research, mistaking one for the other could be detrimental. Confusing the two, especially on controversial topics, amounts to making two very different claims.

Take an article I found on gun control laws, for example. The article is in favor of stricter gun control laws and uses statistics to support that position. Its findings show that places with more guns (P) have significantly more homicides (Q) (Williams, 2012). For this reason, the author believes stricter gun laws will help diminish homicide rates. He also states that the “slaughter of innocents (I) happens because we protect guns (R) before children and other human beings” (Williams, 2012).

If we translate these two statements into logical form, the claimed conditions turn out not to be necessary conditions.

Statement 1: If ‘P’ then ‘Q’. ‘P’ is not a necessary condition for ‘Q’ because if we eliminate guns we do not necessarily eliminate homicides. Homicide can still occur without the use of a gun.

Statement 2: If ‘R’ then ‘I’. ‘R’ is not a necessary condition for ‘I’ because the slaughter of innocents is not necessarily done through the use of guns. At the same time, just because we protect guns does not then make it permissible to slaughter innocents.

By attempting to find a causal link between the presence of guns and homicide, conclusions are being drawn that are not logically true. Guns cannot be shown to be a necessary condition for homicide; at most, the presence of a gun is sufficient as a means of committing one. Murder can occur without a gun, and even when a gun is present, nothing guarantees that a murder will occur.

Once again, if we apply this to research, confusing these conditions could cause huge problems. Claims can be made that one thing is necessary for another when in reality it is not. Stating that a condition is necessary when in fact it is only sufficient makes a substantial difference in what the findings mean.

I found it very difficult to find explicit mention of necessary and sufficient conditions in psychological research. This is because the confusion of these conditions is embedded within the methods used and the way the results are presented. The findings may sound good, but they are often logically incorrect. I feel that these conditions are important to all disciplines and are being largely overlooked. The proper distinction between the two is imperative to establishing proper causation.

Can you find examples in research where these conditions are properly used? Can you find some where they mix them up and, thus, compromise their findings?

References:

Swartz, Norman (1997). The Concepts of Necessary and Sufficient Conditions. Simon Fraser University. Webpage accessed October 15, 2015: http://www.sfu.ca/~swartz/conditions1.htm

Peacock, Kent (2013). Basic Symbolic Logic. 2013 PDF accessed October 15, 2015 (webpage provided is the 2009 version): http://classes.uleth.ca/200803/logi2003a/SymbolicLogicText_2.pdf

Williams, Ray (2012). Gun Control and the Culture of Violence. Psychology Today. Webpage accessed October 15, 2015: https://www.psychologytoday.com/blog/wired-success/201212/gun-control-and-the-culture-violence

Brilliance of Deception

The concept of deception has come up in past blogs, and I feel the topic deserves more exploration. This week I stumbled across a social psychology experiment that I believe uses deception in an extremely brilliant way.

In this experiment, students are required to perform fairly boring and dull tasks. The students are told that others taking part in the experiment have reported finding it really interesting and fun. The experimenter confides to the student that there are two groups: group one, the one the student is in, receives minimal information about the study, while group two is briefed by a student who has already completed the study and is told how interesting it is. The experimenter admits this because the student hired to talk with participants before they complete the study is supposedly unavailable, and the experimenter wants the current student to give the information to the waiting participant instead (Festinger and Carlsmith, 1959).

There were three groups involved in this experiment: a control group, a group offered $1 to inform future students, and a group offered $20 to do the same thing. All students rated the study as uninteresting except those offered the $1 incentive (Festinger and Carlsmith, 1959).

The reason for this lies in cognitive dissonance, “anxiety that results from simultaneously holding contradictory attitudes, beliefs, or the like” (Online Dictionary, 2015). Those with the $20 incentive felt less dissonance since the reward was higher. The smaller the reward, the harder it was to justify lying, and so those students changed their beliefs to eliminate the dissonance.

This study has many components, which I have only briefly summarized. To fully appreciate the complexity of the study, please check out the following link: http://psychclassics.yorku.ca/Festinger/

I feel this study is incredibly brilliant. The results teach us something about why people holding conflicting views find reasons to change their beliefs to eliminate the dissonance. Not only that, the study itself was very well conducted. The experiment had many components and was complicated by the way the researchers chose to deceive the students; that complexity is what makes it admirable. There were multiple, well-executed steps of deception. The experimenter had to act in a way that drew no suspicion to the true nature of the study, as did the girl hired to pose as the waiting participant. Eleven students’ data had to be removed, and of those eleven only five were removed due to suspicion, leaving 60 students who fell for the deception (Festinger and Carlsmith, 1959).

Now, many people view deception as potentially harmful. They feel that deceiving people is immoral and that the negative feelings participants experience upon learning they were duped are not justified by the results. But, if you think about it, deception is used in every experiment to some extent: participants can never know the exact, true purpose of the study, or they may compromise the results.

In Milgram’s studies, deception was at the heart of the experiments. It is a widely held view that the deception was immoral and would not be approved today. However, those experiments provided science with very important information. Because of this, I would approve Milgram’s studies, even today, if I were asked. The running theme here is that deception is acceptable if the question being asked is beneficial to science.

Based on this notion, I feel that deception is perfectly acceptable, and that it can be used to varying degrees. In the study mentioned above, the results could not have been obtained without deception; the study relies on it. I personally believe the information gained was important in furthering our understanding of human behavior, rendering the use of deception acceptable. Methods can be used in a straightforward, well-conducted way, but they can also push experiments above and beyond. While no experiment is perfect, I feel this study uses deception brilliantly because of its complexity. Having multiple steps, and relying on the people involved to convey the information properly, creates various possibilities for error. Yet I don’t believe that simpler is always better; this experiment deserves to be thought of as one that did not simply use deception as a method but pushed deception to see how far it could go.

Are there any other experiments you know of that use deception in a brilliant way? Do you disagree that particular methods can be used in more impressive ways than others? Why or why not?

References:

Festinger, L. and Carlsmith, J. M. (1959). Cognitive Consequences of Forced Compliance. Journal of Abnormal and Social Psychology; reproduced in Classics in the History of Psychology. Webpage: http://psychclassics.yorku.ca/Festinger/

Online Dictionary (2015). Definition of Cognitive Dissonance. Webpage: http://dictionary.reference.com/browse/cognitive-dissonance

Learning through Imitation

For this week’s blog I have chosen to bring an interesting experiment to your attention. Once again I have decided to incorporate findings involving animals, but this time compared against results from human children as well.

Learning is a very fascinating part of our existence. In this 2003 study by Whiten and Horner, differences were found in the ways chimpanzees and humans learn. The experiment went like this:

In experiment 1, 12 chimpanzees were given tasks to complete. The chimpanzees were required to imitate the actions of the researcher to open a box containing a food item. In all conditions the food item was sealed in an opaque plastic wrapper inside an opaque tube so that the chimps could not see or smell it. The box was divided into irrelevant tasks at the top and relevant tasks at the bottom.

The researchers also used a ‘two-action’ design, something I had never heard of until now, to demonstrate the importance of imitation. The bolt on top of the box could either be pulled to the left or pushed from the right, and the door could either be slid or lifted out of the way. These were divided into two methods. Method 1: push the bolt and lift the door. Method 2: drag the bolt and slide the door. Subjects were shown only one method. If the chimps performed the demonstrated method significantly more often than the one not shown, the ‘two-action’ design would strengthen the argument for imitation.

Chimps performed this task with both an opaque box and a translucent box. When the bolt was removed, a tool would be placed in the revealed hole; this action is irrelevant because the tool functioned only to hit a barrier. When demonstrators did this using the opaque box, the chimps followed the actions exactly to retrieve the food item. However, once the same was done with the clear box, they skipped the irrelevant steps and performed only the relevant ones. The results were the same regardless of whether the chimps interacted first with the opaque box or the clear box: they made significantly more irrelevant actions with the opaque box than with the clear box.

The exact same procedure was followed in experiment 4, but with preschool children instead. The results were quite different from those found with the chimps. The children performed the same as the chimps with the opaque box, but continued to imitate the irrelevant actions with the clear box (Horner et al. 2003).

The researchers explained that the results of the study teach us a lot about learning. They proposed that chimps learn by using causal information. The chimps imitate when they observe the task with the opaque box because they see the researcher obtain the desired result with those actions. However, once they are given the causal information, they no longer need to perform the irrelevant tasks and switch to emulation. The children, by contrast, learn by imitation the whole time and do not switch to emulation. The researchers suggest that the prevalence of imitation in children’s social learning may stem from a strong focus on the actions of the demonstrator (Horner et al. 2003).

I really enjoyed reading this study and felt the researchers were very thorough and precise in their work. The concealment of the food item was a very good way to control for the possibility of non-social learning (Horner et al. 2003). I was particularly interested in the ‘two-action’ design, as this is the first time I have read about it. It demonstrated that the subjects were truly imitating the researcher rather than just opening the box any way they could.

The procedure they used was well thought through and covered all the bases. I like that they compared the results to children so that we learn how both chimps and humans learn. What other studies can you find that demonstrate human children learning through imitation? What methods did they use? Were any of them similar?

** I have posted a video portraying the same experiment done by Horner and Whiten, give it a watch!

References:

YouTube (August 27, 2010). Indoctrination – Chimpanzee versus child. Why do children imitate slavishly? Video.

Horner, V. and Whiten, A. (2003). Causal knowledge and imitation/emulation switching in chimpanzees (Pan troglodytes) and children (Homo sapiens). Webpage: http://www.emory.edu/LIVING_LINKS/publications/articles/Horner_Whiten_2005.pdf

Blind and Double Blind Experiments

During experiments, researchers may withhold certain information about the study from participants while remaining fully aware of all the details themselves. This is called a blind, or single-blind, experiment (Woo, 2013). The design arose once researchers became aware of participant bias: when participants know the aim of the study, they tend to behave differently than they otherwise would, acting the way they believe the researcher expects them to (Woo, 2013). When this occurs, the results of the experiment are skewed by the participants’ bias. To control for this, experimenters run blind experiments to prevent participants from altering the results.

While blind experiments control for participant bias, they do not control for experimenter (or researcher) bias. Experimenter bias occurs when the experimenter consciously or unconsciously influences the result of the experiment. There are many ways for this to happen, including, but not limited to, selection bias (selecting participants who are more likely to yield the desired result), wording and phrasing, the way the room is set up, and so on. Experimenters may not even be aware that they are bringing a bias to the experiment: how they interact with participants, set up the experiment, and analyze it can all introduce bias. For example, in a study by Rosenthal and Fode, students were informed by the experimenters that their rat was either ‘maze-bright’ or ‘maze-dull’ and that the rats were genetically different (Rosenthal, 1963). After watching the rats go through the maze, students with a ‘maze-bright’ rat reported it going through the maze at a faster pace (Rosenthal, 1963). These results were due to the information provided by the experimenters influencing what the students expected to see. One way to control for this possibility is to conduct double-blind experiments.

A double-blind experiment controls for participant bias while also controlling for experimenter bias. With this method, the experimenters do not know which group is the control and which is the experimental group. They can therefore run and analyze both groups without that knowledge, and if they don’t know which group is which, they cannot influence the results, even unconsciously (Goodwin, 2009).
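As a toy sketch of how blinding can be enforced in practice (my own illustration, not from any of the cited sources; `blind_assign` and its details are hypothetical), one approach is to randomize participants into opaque coded groups and keep the code-to-condition key sealed until the analysis is done:

```python
import random

def blind_assign(participants, conditions=("treatment", "control"), seed=None):
    """Randomly assign participants to conditions, returning (assignments, key).

    `assignments` maps each participant to an opaque label ("A", "B", ...),
    which is all the experimenter ever sees while running and analyzing the
    study; `key` maps labels back to real conditions and is opened only
    after analysis is complete (the "unblinding" step).
    """
    rng = random.Random(seed)
    labels = [chr(ord("A") + i) for i in range(len(conditions))]
    shuffled = list(conditions)
    rng.shuffle(shuffled)              # which label means which condition is random
    key = dict(zip(labels, shuffled))  # sealed until unblinding
    assignments = {p: rng.choice(labels) for p in participants}
    return assignments, key

assignments, key = blind_assign(["p1", "p2", "p3", "p4"], seed=42)
print(assignments)  # labels only, e.g. {'p1': 'A', ...}; experimenter works from these
# ... collect and analyze data per label, THEN unblind:
print(key)
```

Note this simple sketch assigns each participant independently, so group sizes are not balanced; a real trial would use block randomization, but the blinding idea, keeping `key` away from whoever runs the experiment, is the same.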

Given the possibilities for experimenter bias in single-blind experiments, I feel they should be eliminated from research entirely. Double-blind experiments are far more effective, controlling for biases that single-blind experiments cannot. While double-blind experiments are not perfect, they control for both participant bias and experimenter bias, whereas single-blind experiments control only for participant bias. I believe that to ensure more accurate results, the occurrence of research biases needs to be minimized. Experiments that cannot do this leave too much room for error, in my opinion, and should therefore be eliminated as a form of experimentation.

Can you find evidence to invalidate double-blind experiments? While they work well to control for participant and experimenter biases, are there other research biases that double-blind experiments cannot control for? If so, is there a way to minimize the chances of these biases occurring? If not, is the possibility of these other biases enough to propose eliminating double-blind experiments as well?

References:

Nadelhoffer, Thomas (2012). Experimental Philosophy and Experimental Bias. Webpage: http://philosophycommons.typepad.com/xphi/2012/12/experimental-philosophy-and-experimenter-bias.html

Woo, Brandon (2013). Participant Bias. Webpage: http://futurescienceleaders.org/researchers2012/2013/02/participant-bias/

Goodwin, James (2009). Research in Psychology: Methods and Designs. Print; found on Google Books: https://books.google.ca/books?id=eNsVUGTMcDoC&pg=PA234&lpg=PA234&dq=example+of+participant+bias+in+psychology+experiments&source=bl&ots=roRDgO1l6k&sig=a2dwdpVyW6JjHysPuEut7342I&hl=en&sa=X&ved=0CD8Q6AEwBmoVChMIuYDny7uTyAIVSZaICh1NrAPf#v=onepage&q=example%20of%20participant%20bias%20in%20psychology%20experiments&f=false

Frye, Tiffany (2003-2015). Double-Blind Study: Definition and Explanation. Study.com resource webpage: http://study.com/academy/lesson/double-blind-study-definition-lessonquiz.html

Rosenthal et al. (1963). Dull Rats and Bright Rats. Webpage: http://cranepsych.edublogs.org/files/2009/06/Dull_rats_bright_rats.pdf