Friday, October 1, 2010

Why are there so many references to DBR not being accepted?



(Kelly 2004) asks whether Design Studies is a loose set of methods or a methodology.
Design produces an artifact that outlasts the study and can be "adopted, adapted and used by others"
The object of study is the "process of engagement" between the student and the teacher.
Experiment = testing of hypotheses and conjectures
Design of software involves engineering a broader learning environment
Questions about learning are incorporated into the software – reified, explored, tested and this allows for its use and testing elsewhere
Software (or conceptual framework) can then be used by others – Roschelle 2000 says without it one may not even be able to evoke cognitive learning in the students – interesting concept
He argues that the set of process descriptors used to define DR do not yet enable DR to be defined as a method, as they do not define the conceptual structure and do not comply with the definition of a method, namely "a procedure, a process, a set of steps to follow."
This is required in order to meet the claims it makes, such as enabling one to "elicit a generative framework, advance our understanding of disciplinary knowledge, serve as an incubator for new research techniques, advance our skill in designing learning environments or lead to better instrumentation"

According to Kelly a mature methodology has at least some of following characteristics:
ARGUMENTATIVE GRAMMAR
The logic that supports the use of the method and the reasoning about its data (supplies the logos in the methodology). It can be viewed separately from the exemplar.
Some questions that can be asked to determine it are:
"What guides the reasoning with these data to make a plausible argument?"
"What is the separable structure that justifies collecting certain data and not others and under what conditions?"
Reviewers may not reject studies based on the choice of method, but due to the violation of the logos that one would expect to see with that choice of method.

CONTRIBUTIONS TO PROBLEMS OF DEMARCATION OR MEANINGFULNESS
The problem of demarcation is the application of argumentative techniques that distinguish scientific claims from pseudoscience or metaphysical claims (the example given is how scientific arguments backed by empirical methods can support astronomy and debunk astrology)
Kelly looked at Collins' description of design studies and asked the following questions that tried to isolate the demarcation of such studies:

-   regardless of the 'richness of descriptions', if there are no experimental controls how can one generalize to other settings? (QA – no experimental control, but RC yes)
-   where multiple dependent variables are present (and not controlled), how does one break down the complex interactions in order to determine causal attributions?
-   Where the 'product' is ephemeral, i.e. learning as opposed to an engineering product, and the claims are general, does allowing for flexible design revision make sense? But as discussed earlier with regard to the software as an artefact – is learning the actual product? For an engineered transport product – a car – the product is the car, and WHAT IS THE TRANSPORT? In DR the product is the software or conceptual framework and the transport is learning.???? Look into this further.
-   Is something forgone when social interaction is valued over social isolation (Collins)? Does this map to foregoing a focus on individual cognition (as Kelly sees it), and can findings thus not be linked to other areas of science?
-   Kelly asks what is the basis for developing profiles of the learner along a number of dimensions and not others
-   He questions the meaning of hypothesis testing in situations that are fluid and messy (I don’t really understand the grounds for the issue here ??? Look into this further – is it because one cannot keep all other things equal and isolate the other causal effects in order to isolate the causes of one's hypotheses?)
GENERALIZATIONS OVER ACTORS
As studies focus on a few subjects there is a problem of generalization to a larger N, and this weakens the methodology as it lacks sampling and descriptive power.
In addition, because the cognitive responses of targeted students are responses to targeted perturbations that are part of the design experiment, Kelly argues that the sampling problem over actors is compounded, as it weakens the generalization to a larger student body and normal cognitions.
So is this term "generalizations over actors" a standard term with specific meaning in the domain??? (Look into this) and could this be the reason for many scaling up issues that are referred to in the various experiments?

GENERALIZATIONS OVER BEHAVIOURS
Causal claims about behaviour are weakened when there is little or no structural or statistical controls via an experimental control
GENERALIZATIONS OVER CONTEXT
Standard ethnography does not place intervention and iteration centrally – whereas the context in DR requires that the context be engineered, the context is a designed environment. So this could be a reason that the findings may not be generalized to other contexts?
Basic Design Ethnography describes and interprets a particular culture.
Kelly then describes the "design ethnography" behind QA – which goes a step further to critique and change the social commitments of that culture. These changes and critique emerge as an artifact much as the learning artifact vs the product car in the previous paragraph.??? I need to ponder on this and to see if my understanding is correct.
Kelly points out that the emerging design ethnography methods need to be spelled out and their strengths and weaknesses shown. (Slight difference in how I am explaining this to how it appears in the article – Kelly states "explicitly spell out the methodological strengths and weakness of the emerging design ethnography methods")
PROBLEM OF MEANINGFULNESS
Kelly describes the different viewpoints that exist with regards to the validity of scientific discoveries emerging from pure scientific method vs eureka moments that are the result of processes of thought founded on the imaginative and inspirational. Medawar's quote gives credence to the process whereby the imaginative comes first and it is then followed by the scientific facts and acts.
Design studies could thus be seen as a step BEFORE model estimation and validation, i.e. 'model formulation' – they generate models and hypotheses
To back up the validity of this process, Kelly quotes Russell Hulse's (Nobel Laureate in Physics, 1993) argument that an infinite number of hypotheses could be tested – testing hypotheses in the traditional scientific manner does not, by itself, create useful knowledge; rather, (powerful) hypotheses could emerge from studies directed at the problem of meaningfulness, and these can then be tested.
So, good hypotheses (questions) advance science, but how can they be generated?
From a literature review? Kelly asks how the question emerges and raises the point that sometimes a literature review actually inhibits progress, quoting the Nobel laureate Müller to back this up. Kelly suggests that constructs found in literature reviews have emerged from design contexts and it is these studies that may "promote the identification and growth of new ideas and constructs."
So does this mean that design studies are placed in a different position on the continuum of scientific methodology? Instead of seeing them as hypothesis testing, are they hypothesis generating AND, through their iterative processes, do they become hypothesis testing as they advance within their own paradigm? Is this unique to them? Is this an area in which they can be differentiated??? I need to look into this further

GENERALIZATIONS OF CONCEPTUAL FRAMEWORKS OR ARTICULATION STRATEGIES
Kelly appears to see Design Studies as contributing ideas for further research. Conceptual frameworks in design studies emerge from the authentic settings and experiences of participants and thus guide further observations, delimit variables to study, and contribute to sense-making of the data. Later studies with methodologies that adhere more to "scientific methods" can then extend this to determine generalizations across actors, behaviour and context.
So does this mean that trying to fit Design Studies into a Scientific Mould negates an aspect of it???? I need to understand this better. Could design studies be split into two different components, one hypothesis generating and one hypothesis testing? The hypothesis testing component should adhere to a specified methodology, but should the criteria the scientific community uses to assess the hypothesis generating component, and how designers go about it, be viewed through another lens? Should I look at VS and RC with a view to differentiating these 2 components?

RESTRICTIONS ON RESEARCHER BIAS
Mature methodologies have guidelines for minimizing the occurrence of inevitable researcher bias.
I need to have a better understanding of the domain to grasp what Kelly is referring to in this section.

Kelly suggests that "PART OF THE METHODOLOGICAL WORK FOR DESIGN STUDIES IS TO CLARIFY IN WHICH STAGE AND FOR WHAT PURPOSES METHODS ARE APPROPRIATE AND INAPPROPRIATE"

BALANCING CONTINGENT WITH NECESSARY CLAIMS
Contingent = arbitrary vs Necessary
Kelly points out that as Design Studies occur in naturalistic settings, the raw materials are contingent, unpredictable and unrepeatable.
Outputs are going to be considered simply descriptive by others until the components can be modelled on what is necessary and thus be considered more scientific. His opinion is that when the contingent is connected to the necessary that is when theory-building occurs.
He refers to 3 examples in McCandliss et al. which I might have to refer to.

In addition he suggests it is necessary to collaborate with other methodologists to develop cross-disciplinary initiatives with the behavioural, cognitive and social sciences (he gives the example of the cognitive neurosciences) in order to develop aspects of model building or later stages of research-as-design. For this he makes two references, Sloane & Gorard and Bannan-Ritland, which I am going to consult to see what he means.

PRODUCING USABLE KNOWLEDGE.
Kelly states that educational researchers have a difficult task in that they need to answer to a variety of audiences, their claims are difficult to substantiate, and they have to include these claims in curricula to fit into classrooms.

Educational Research acts on and perturbs the systems in which it works, and this adds complexity to the scientific problem. He states that "the laboratory scientist's 'error variance' is the educational researcher's reality."

Kelly thus suggests that researchers in this paradigm have to exceed the criteria for scientific claims. He asks whether this research can produce learning/teaching artefacts that are efficient, workable and economical, and that do not require a high-cost switch from current practices. Designers must include in their design (evaluative criteria) factors that will cater for the problems inherent in later adoption and adaptation.

SUMMARY
It appears that Kelly is saying that Design Research is not accepted as a scientific method, but that the reasons for this are inherent in what this research designs (learning outcomes) and the contexts in which it occurs (messy, naturalistic settings). He seems to suggest that solutions for these issues could be found by working with other disciplines. Another factor that could make this research more scientific is breaking it up into different stages and looking at what is appropriate in each stage (for this he refers to the Zaritsky reading).
