Exploring the Design Problem Space

Last year I posted my first attempt at mapping the Learning Design problem space, arguing that a sound understanding of it is a precursor to any good development of designed learning. The following quote, taken from a recent paper about teaching architectural design, I think furthers this argument and captures what could also be the essence of designed learning:

‘Design can be viewed as a mutual learning process among designers (Beguin, 2003) and can be described as a reflective conversation between designers and the designs they create… Traditional architectural studies teaching is based on the notion that successful design solutions and learning are a direct outcome of the extent to which the design problem space is explored’ (p434, my emphasis) 

Wang, W., Shih, S. & Chien, S. (2010). A ‘Knowledge Trading Game’ for collaborative design learning in an architectural design studio, International Journal of Technology and Design Education, 20, pp. 433-451.


Authentic Assessment: approaches and practices

The term ‘authentic assessment’ is likely to be unfamiliar to many reading this blog; however, it is a concept that Falchikov (2005) observed ‘appears to be increasingly used in further and higher education’. So what explains this discrepancy? Falchikov herself offers one reason, explaining that ‘my own work… has involved my students in all of the activities [I regard as authentic]. However, I have not used the term ‘authentic’ to describe the type of assessment being carried out.’

In some recent research that Denise Whitelock and I have been doing within the OU we have been examining the concept and practice of authentic assessment – and in particular, how to make visible those assessment approaches associated with the concept but not understood as such. It’s been great to have the opportunity to explore the term a little and in this post I hope to outline a few of my initial impressions.

The current use of the term emerges from a discourse around ‘authentic’ and ‘genuine’ testing that had become established by the mid-1980s and which combined with the broader social constructivist project to become part of what Serafini considers to be the most recent of three assessment paradigms: ‘assessment for enquiry’.

The notion of ‘authentic’ certainly appealed to those interested in getting closer to ‘real’, ‘meaningful’ learning and represented an idea with an innate capacity to help problematise traditional assessment practice. Early definitions include Wiggins, who defines it as ‘[the extent to which] student experience questions and tasks under constraints as they typically and ‘naturally’ occur, with access to the tools that are usually available for solving such problems’; Newmann et al. (1996); and Torrance, who suggests ‘[it is the] assessment tasks designed for students should be more practical, realistic and challenging than what one might call ‘traditional’’ and that it is ‘a generic term… to describe a range of new approaches to assessment.’

By 2000, overlapping interpretations of what authentic assessment meant and what authentic assessment tasks comprised were emerging (e.g. McTighe and Wiggins (1999) and the review by Cummings et al.). There were also efforts to instantiate these into guidance or advice on designing authentic assessment tasks (Darling-Hammond & Snyder; Williams; Hughes) or to integrate the idea of authenticity into principles for instruction (for example, Merrill).

The upshot of this is a range of emphases and interpretations of what ‘authentic’ means (sometimes in respect of particular disciplines) and of what constitutes an ‘authentic’ assessment task. In the itemised paragraphs below I attempt to identify some of the components of the authentic assessment discourse.

Benchmarking Assessment (conference paper): breaking down barriers and building institutional understanding

Denise Whitelock and I have recently been working on a project to identify key measures (or criteria) of assessment processes and practice. The aim is to develop a benchmark instrument for use in better assessing and understanding assessment practice in Higher Education.

As part of this work, I am presenting on our behalf a paper at the Computer Assisted Assessment Conference in Southampton this week. In the abstract we make the point that benchmarking offers a comprehensive way of measuring current practice in an institution per se, whilst also gauging achievement against external competitors. It would appear that, although e-learning has been benchmarked at a number of universities in the UK and abroad, no one to date has tackled the area of assessment, which is now becoming of more concern with the advent of e-assessment.

Our paper describes the construction of a set of benchmarking measures/indicators and the outcome of early pilots which combined a survey instrument and semi-structured interview methodologies. The findings suggest that building a comprehensive and robust core of benchmark measures would have great utility and value to institutions: not just in external benchmarking but in internal benchmarks and reviews, setting baselines, exploring the student experience, providing staff with data meaningful to their role and professional development, and supporting continuous improvement.

The paper is accompanied by our current working draft of the benchmarking measures. [Since writing this post the measures have been revised further. A new version of the measures is available at: Assessment Benchmarking Criteria v17_A]. We will be very interested to hear your comments or feedback on the paper and the draft.

The Lattice model for designing learning: defining the design problem space and guiding the design solutions

Sheila MacNeill recently suggested posting current work or ideas ahead of the JISC Design Bash (taking place later this month in Oxford). Since January I’ve scaled back my time with the OU Learning Design Initiative, however, I am still involved with project managing our JISC funded project. In addition, I have continued to retain my personal interest in how visual conceptualisations and cartographies of learning could benefit the design process, and in how these can enrich, even fundamentally change, the student experience of learning.

In this post I’ll address the first part of this interest – the design process. There are two related concerns: how to set out and imagine a model of the design problem space (the first step in developing a solution), and how this could be used as a more practical tool for designing learning. In this post you will notice that I talk about ‘designing learning’ or ‘designed learning’ rather than ‘learning design’ or ‘Learning Design’; this is intentional, as should become evident.

The diagram below shows where I am currently in imagining what key dimensions exist in a design problem/solution space and how they link together. I call this the Lattice model because of the inter-relational nature of the design elements. The purpose of laying this out as a network, rather than as a list or in linear form, is to make explicit and explore the interconnectedness in designing learning. I would note that this is just a snapshot of a changing model.

The construction of this model has been framed by a number of observations and literatures. I’ll set out a few below but haven’t the space for an exhaustive account:

• Representations of learning designs tend to be concerned more with the observable, performed elements of activity, but we need to move much further beyond this. The sequence (or swim-lane) visualisation is a good example – a vertical line showing learning tasks with resources, support and sometimes learning outcomes connected to it. This layout was used in an early paper by Oliver and Herrington in 2001, who showed the ‘three critical elements of learning design environments’ – learning task, learning resources and learning supports – with a basic notational system of rectangles, triangles and circles. Eight years later, these components are still important to learning design – for example Helen Beetham (2009) defines a learning activity as ‘a specific interaction of learner(s) with other(s) using specific tools and resources, orientated towards specific outcomes’ (marked A on the diagram below). Conole’s pedagogy profiler and the OU’s broader project to combine pedagogic and business visualisations of a course are examples of this moving forward with specific representations of aspects of designed and delivered learning. However, it remains uncertain how these descriptions connect together, how they help conceptualise the overall problem/solution space and how far they offer critical understanding. Many constraints on, and drivers of, a design remain undisclosed. A greater range of dimensions (elements relating to the design) is needed to fully map the design landscape.

• The ‘sociocultural approach’ is an important perspective for educational psychology in its attempts to theorise the role of culture and society. Although this is certainly not the only theoretical position from which to derive understanding (see later), given its key role any model should aim to accommodate (and yet also push?) this. In doing so we should acknowledge that the designer/teacher is not detached from the design process but implicated at a personal level with it. As the designer is both culturally and historically situated, their positionality and ‘intent’ (a term with echoes back to American pragmatism) become important. Goodyear talks of the importance of representing intent, Strobel et al. (2008) of capturing the design context, and my experience at the OU working with Paul Clark and Andrew Brasher in trying to de-construct and visualise existing units revealed how important it is to know the thinking – and evidence supporting that thinking – ‘behind’ a design (marked B below). Moving further, there is a need to situate learning as a social act – as Rogoff, for example, holds: learners engage in shared endeavours that are both structured and constituted by cultural and social norms (Rogoff, 1995). However, it is difficult to find a language with which to label this dimension/box because traditions in social and cultural theory range widely on how this act could be interpreted and there is now an increasing interlacing between them. For now, I’ve borrowed from Giddens’ structuration theory the notion that there are structural rules and resources, and added discourse to this, principally as a nod to post-structuralism and hermeneutics (C). This label is therefore vague enough, but it drives us: to a more nuanced understanding of our students – be this deeper psychological (Solomon, 2000), social or cultural (Scheel & Branch, 1993) understanding – and of the associated opportunities for, and means of, learning these enable or constrain; and to the intention of the designer and the purpose of the activities (and ‘where’ they happen).

• From other design disciplines we learn the importance of first reflecting on and describing the design ‘problem’ space – from which the solution(s) will emerge (i.e. not just racing straight into developing the solution) – see earlier posts. Early IMS Learning Design had little to say about how one actually arrived at the design and whilst patterns outline aspects of the problem, the representation is designed to support someone looking for a solution rather than understanding the problem in the first place.

• The role of assessment in the design needs to be reconsidered – seeing it not as a product but as an activity itself. One option is to understand assessment as a process that ‘acts on’ student output (e.g. an object, action, spoken word, etc.). It would see this output as a resource produced for a specific audience that could be used again later in the learning activity or that could be transformed into a new artefact/resource (e.g. through the activity of the teacher, student, etc.). Irrespective of whether, or how, this output is re-used in the learning activity, it will (or should) also constitute the evidence: to demonstrate that the learning outcomes/objectives have been achieved (marked D below), to reveal other unanticipated outcomes (after Eisner, Polanyi, etc.) and to support other forms of evaluation (I’ve just jotted down Zakrzewski’s three on the diagram at present).

• There remain many other, often more pragmatic, perspectives to integrate into the design problem space – thereby reflecting the heterogeneity of educational thought. For example: instructional design’s interest in detailing what is to be learnt, learning tasks, student prior learning, etc. (marked E); and the belief that a design should be built around key learning or conceptual ‘challenges’ (G). Clearly, to appeal to a range of teachers, the model should not be restricted to one individual theory of learning. This is partly why I favour talking about ‘designing learning’ or ‘designed learning’ rather than ‘learning design.’

• Design of a unit of learning is influenced by practical constraints and conditions (H) defined at higher levels, e.g. the block or the course (the issue of layers of design and fitting them together has been much discussed and is something we’ve looked at in mapping courses), by other ‘evaluation’ demands from the institution or researchers (F), by previous units (for example, prior learning (I)) and by guidelines and training required of staff (J). The temporal and multi-scale nature of the design problem needs representation (grey-shaded boxes).

• Visual representation is a powerful means to communicate complex, non-linear, inter-connected relationships. It offers distinct advantages over linear descriptions and can support problem-solving performance (for example, Baylor et al., 2005). This is supported by our small-scale studies at the OU (n=45-50) in which we found that a majority of staff said there were aspects of their work that would or do benefit from using visual representations and techniques (81%); that they would like to improve their knowledge of visual representation and tools (81%); and that more use of visual representations (that show what is to be learnt and how) could help students better understand and plan their study (73%) (Cross et al., 2009).

 

As a practical design tool? 
 
Whilst the model itself can provide a framework for imagining the problem/solution space, of interest to many will be how this model can be translated directly into a more practical application. The screenshot below shows an early attempt in Excel. Here, each dimension becomes a zone (a box) in which information about the design (be this text, lists or labelled mind-mapped objects) can be inserted. In a typical scenario, the design will evolve and mature as …

Conference poster: How do ‘WP’ students differ from others in their engagement with e-learning activities?

Tomorrow evening I’ll be presenting a poster with Rita Tingle at the OU’s Widening Participation Conference. In it we look at evidence from a student survey (n=120) and weblogs of student access to our VLE (n=650,000) for possible differences in use of online course components by Widening Participation (WP) and young students. Click here to view the Poster (PDF 97k)

From the weblogs and survey we identify a gradual fall in use over the course despite the fact that the end of course surveys show no major issues with the technology (and indeed students apparently appreciated much of it). We find that, whilst there is little difference between WP students and other students, students under 25 used online quizzes and optional podcasts less than other students – the graph below is one of six on the poster (the labels ’09B’ and ’08J’ indicate the year in which the course was presented).

We also find from the survey that daily use of a computer for study is highest for 35-46 year olds and use for leisure highest for 25-35 year olds. This is apparently at odds with the discourse (/myth) that ‘Net’ Generation students are more likely to use such technologies and suggests potential for a larger study.

This data represents only part of our broader investigation, which covers students’ initial reactions to online components such as the course website, study planner, forums, podcasts, videos, quizzes, etc., how their use of them changed, reasons for skipping activities, and use of computers and study preferences and practices. Together this may help explain why use falls and account for differences such as those we find in this poster. We also aim to ask if there are trends in relation to student educational qualifications and completion/pass rates that are also evidenced in engagement with online course resources.

Exploring spheres of sharing: Analysis of contributions to Cloudworks – Part 2

In my last post I began an analysis of 250 subscribers to the teaching and learning sharing website Cloudworks – the post presented some headline data relating to size, rate and longevity of contributions. Of course, the next step is to get under the skin of these data to begin unpacking patterns of engagement. To do this, it would be useful to have a representational form capable of showing which Clouds (web-pages) the subscriber contributed to, what they contributed, how much, the time between contributions, and, importantly, how all this fits into the wider sequence of contributions to these Clouds by others.

Visualising these patterns should better equip us for interpreting subscriber activity. I’ve come up with a method for representing the contributions made by an individual subscriber (although this should work for representing contributions by one or many to any collection of Clouds). The approach aims to visualise the contributions made to a Cloud in columns running across the chart, and to show the alternating periods of activity (contributions) and inactivity in rows. Symbols represent the contributions made by different groups – in this case by the individual subscriber, members of the team developing the site, and other Cloudworks subscribers.
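To make the layout concrete, the sketch below (in Python) shows one way such a chart could be drawn, assuming each contribution has already been reduced to a (Cloud column, activity period, contributor group) triple; the sample data, field names and marker choices are illustrative rather than those used to produce the actual diagrams.

```python
import matplotlib.pyplot as plt

# Hypothetical pre-processed data: (cloud_column, period_row, group), where
# period_row indexes the subscriber's alternating spells of activity and
# group is 'subscriber', 'team' or 'other'.
contributions = [
    (1, 0, "subscriber"), (2, 0, "subscriber"),
    (1, 1, "team"), (2, 2, "subscriber"), (3, 2, "other"),
]

markers = {"subscriber": "o", "team": "s", "other": "^"}

fig, ax = plt.subplots()
for cloud, period, group in contributions:
    # Clouds run as columns across the chart; periods of activity run as rows.
    ax.scatter(cloud, period, marker=markers[group], s=80, color="black",
               label=group)

ax.set_xlabel("Cloud (column)")
ax.set_ylabel("Period of activity (row)")
ax.invert_yaxis()  # earliest period at the top
ax.set_xticks(sorted({c for c, _, _ in contributions}))

# Collapse duplicate legend entries created by plotting point-by-point
handles, labels = ax.get_legend_handles_labels()
unique = dict(zip(labels, handles))
ax.legend(unique.values(), unique.keys())
plt.show()
```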

The following three images present: an example of one of the more prolific contributors in my sample; a key to the diagram; and an annotated diagram explaining how to interpret the layout.

The above example shows an interesting pattern of engagement: an intense spell of activity within the first two months (6 periods) but no contributions since. We see how the individual configures the Clouds they create (adding comments and links early on) and can compare the impact of, and interest in, the ones they formed (such as columns 5 & 6) with other Clouds they contributed to (for example, those established by a project team member, given in columns 3, 4 & 7). It is also interesting to note that the subscriber often contributes to several Clouds in the same ‘period’ of activity. This may indicate that when they do log on they look across several of the Clouds they are (or have been) interested in.

The next visualisation (below) shows the contributions made to Cloudworks by another subscriber in the sample. The diagram format certainly supports a quick comparison with others, such as the one above. It shows the …

Exploring spheres of sharing: Analysis of contributions to Cloudworks – Part 1

Over the last couple of years new social networking sites have begun to emerge that aim to support the sharing of teaching and learning ideas and experiences and to develop professional networks. These nascent liminal spaces are intriguing for both the social practices they may perpetuate and the new practices they may promote. In this post, and I hope some future ones, I aim to look at these emerging spheres of sharing, which are more qualitatively complex and quantitatively intricate than web-usage statistics or anecdotal quotes can show, as useful as both of these are. What do the patterns of behaviour look like, what is the relationship between the individual and the larger contributing group, and how does the act of sharing evolve and differ?

Initially I have chosen as my study group a random sample of 250 subscribers to the Open University’s Cloudworks website and use publicly available data (this represents approximately 10% of those subscribed to the site). The site looks to promote the sharing of learning and teaching ideas and experience, and currently the majority of contributors are from Higher and Further education. The site has witnessed an impressive rate of subscriber growth given it was only launched last year (for more on the evolution of the site see Conole and Culver (2010) and Galley (2010); references to follow).

First, it is perhaps useful to differentiate between the practice of ‘sharing’ and the more nebulous term ‘using’. Sharing implies exchange – the receipt and return of information between two or more people; a process of active dialogue that transcends simply viewing or receiving information. Cloudworks allows subscribers to make a range of contributions including ‘Clouds’ (a learning and teaching question, topic, issue, idea or experience); comments (on a Cloud object); and links and embedded content (posted to a Cloud) (not all these features were available in early versions of Cloudworks). An analysis such as this provides not only a benchmark to chart further use, but also may help to understand patterns of behaviour and barriers.

The following graph shows the total number of contributions made to the site by each subscriber in the sample. It shows that 61% of the sample have never made a contribution and therefore could be considered never to have ‘shared’ on the site. However, we do find that some 39% of the sample have made at least one contribution (16% of the sample made only one post and 15% made more than one contribution, but all within 28 days of their first post).

 

 

Whilst at this stage this doesn’t tell us about the intent, depth or longevity of contributions or the circumstances under which contributions were made (for example, following Reychav & Te’eni (2009), does sharing look like that undertaken in the formal spaces of conference knowledge sharing or in the informal ones?), what we can say is that just over a third have at least experienced contributing to the site.
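As a rough sketch of how these categories can be derived, the short Python function below classifies a subscriber from a list of contribution timestamps; the data structure here is hypothetical rather than the exact script used, but the 28-day rule is the one described above.

```python
from datetime import datetime, timedelta

def classify(contribution_dates):
    """Classify a subscriber by the pattern of their contributions.
    contribution_dates: list of datetime objects, one per contribution."""
    if not contribution_dates:
        return "never contributed"
    if len(contribution_dates) == 1:
        return "single contribution"
    first = min(contribution_dates)
    # Any contribution more than 28 days after the first counts as continued engagement.
    if any(d > first + timedelta(days=28) for d in contribution_dates):
        return "continued beyond 28 days"
    return "multiple contributions, all within 28 days"

# Illustrative (made-up) examples
print(classify([]))                                             # never contributed
print(classify([datetime(2010, 1, 5)]))                         # single contribution
print(classify([datetime(2010, 1, 5), datetime(2010, 3, 1)]))   # continued beyond 28 days
```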

Measures of longevity of engagement are certainly important for contextualising projections for sustainability. One such measure could be to look at how many have continued to contribute. Analysis of my sample shows 7.6% have made at least one further contribution more than 28 days after their first. What is the implication of this and what does this group look like?

One implication may be that a relatively small group of individuals could contribute a disproportionate amount of content. The graph below shows that just 6% of my sample made over half (56%) of the contributions, and only 15% contributed 80%. Note how I have classified subscribers into six groups based on quantity of contributions.
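For anyone wanting to reproduce this kind of concentration figure, a minimal sketch follows: sort subscribers by contribution count (largest first) and ask what fraction of the sample is needed to account for a given share of all contributions. The counts below are invented for illustration.

```python
def share_of_sample(counts, target_share):
    """Fraction of subscribers (largest contributors first) needed to
    account for target_share of all contributions."""
    counts = sorted(counts, reverse=True)
    total = sum(counts)
    running = 0
    for i, c in enumerate(counts, start=1):
        running += c
        if running >= target_share * total:
            return i / len(counts)
    return 1.0

# Invented example: most subscribers contribute nothing, a few contribute a lot
counts = [40, 25, 10, 5, 3, 2, 1, 1] + [0] * 12
print(share_of_sample(counts, 0.5))  # fraction of the sample making half of all contributions
print(share_of_sample(counts, 0.8))  # fraction making 80% of all contributions
```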

How does this impact the setting and controlling of discourse and the shared sense of ownership? How much equality in contributions should we expect of such a site?

As part of my initial survey, I scored each subscriber by how much personal information they included in their profile. A ‘0’ was given for no information (i.e. only a first name), a ‘1’ for a surname and place of work, and a ‘2’ for adding a Twitter name, photograph or personal website address. A plot of the six groups against the proportion in that group scoring ‘2’ (sharing more personal details) shows that the more an individual contributed to the site, the more likely they were to also share (or indeed have) personal information. The graph is quite striking.
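The scoring rule itself is simple to express in code; the profile field names in this sketch are hypothetical stand-ins for whatever a Cloudworks profile actually exposes, and the interpretation (that a ‘2’ is awarded whenever any of the extra items is present) is mine.

```python
def profile_score(profile):
    """Score 0-2 for how much personal information a profile shares.
    0: first name only; 1: surname and place of work; 2: also a Twitter
    name, photograph or personal website address."""
    if profile.get("twitter") or profile.get("photo") or profile.get("website"):
        return 2
    if profile.get("surname") and profile.get("workplace"):
        return 1
    return 0

print(profile_score({"first_name": "Sam"}))                                      # 0
print(profile_score({"first_name": "Sam", "surname": "X", "workplace": "OU"}))   # 1
print(profile_score({"first_name": "Sam", "twitter": "@example"}))               # 2
```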

These findings may not deviate significantly from what would be expected of a social networking website. However, they may take on greater meaning when set against the ambitions of such sites to broaden spheres of sharing to the mainstream. Whilst barriers are likely to be social and cultural as much as technological, it is certainly useful to have data with which to benchmark practice and support future goal setting.

This data presents the broad context for the next step, a more detailed look at individual engagements. In my next post, therefore, I intend to look at a method of representing patterns of individual use and situating this in the context of others’ contributions to Clouds.

Reychav, I. & Te’eni, D. (2009) ‘Knowledge exchange in the shrines of knowledge: The ‘how’s’ and ‘where’s’ of knowledge sharing processes’, Computers & Education, 53, 1266-1277.

Please use the following to reference this post:

Cross, S.J. (2010) ‘Exploring spheres of sharing: Analysis of contributions to Cloudworks – Part 1’, Latestendeavour Blog, weblog post, 7 March <https://latestendeavour.wordpress.com/2010/03/07/exploring-spheres-of-sharing-analysis-of-contributions-to-cloudworks-part-1/>