
A note on collaborating with GitHub and Wiki (minus)

(A good introduction to the report?)

We start with a brief overview of how we worked in a team of five with one of the team members being 600 km away from the others.

Our main platform for working together on the prototype has been GitHub, a web-based service for software development projects that uses the Git revision control system. Occasionally (e.g. for writing this report) we made use of hbz's Confluence wiki. We used the tool Huboard, which is based on the GitHub API, to keep an overview of the different tasks and their status. In the course of this report, several references will thus point to GitHub issues and comments where certain aspects are covered in more detail.

Additionally, we held a project team meeting via Skype once a week. The project team members from hbz, of course, also had the opportunity to interact directly on site.

Features of the hbz prototype (minus)

Due to the short development time for the prototype, the project concentrated on the realisation of an operational service that allows users to:

  • input data through web forms
  • display data on a world map with basic functionalities
  • search the data (search for what exactly?)

The hbz prototype consists of two applications (Drupal viewing/editing frontend and a JavaScript map) which are themselves based on an application programming interface (API) that enables programmatic interaction with the data. A detailed look at these three parts will follow. But first, the underlying data, its sources and data model will be described.

Data (minus)

Sources (minus)

The prototype is mostly based on data from two different sources:

  • OpenCourseware Consortium (OCWC) member data: The people at OCWC were very helpful in providing us with and explaining the OCWC member data. The data was obtained via an API, see GitHub issue #3 for details.
  • Global list of OER initiatives within WSIS knowledge communities: This data can be downloaded (after registering and logging in) as comma-separated values (CSV), which are rather hard to process, at http://www.wsis-community.org/pg/directory/export/672996.

Along with this data collected from pre-existing sources, there is also some manually added data.

Data model (minus)

We already noted in our proposal that it wasn't clearly defined what kind of resources an OER world map should cover: (This sounds as if it were a flaw in the request for proposals. Of course, we could also have put more weight on developing such use cases ourselves.)

Mentioned in the request for proposals are projects and initiatives as well as people (OER experts). The report summarizing the international conversation that led to this consultation process also mentions journals, repositories and services.

Footnote: D'Antoni, S. (2013): A world map of Open Educational Resources initiatives: Can the global OER community design and build it together? Summary report of an international conversation: 12–30 November 2012, p. 6. URL: https://oerknowledgecloud.org/sites/oerknowledgecloud.org/files/OER%20mapping%20discussion%20Summary%20Report%20Final.pdf
Of course, the information on projects will also contain information on the institutions a project is run by. There are some more resources that would be valuable to have information about, like events (congresses, workshops, hackdays etc.) as well as publications, (software) tools and job offers in the OER world.

Thus, a first and important task in building the prototype - as with any software that creates data - was defining the data model for the OER world map data. Of course, this data model should take into account the actual information that is provided in the source data from OCWC and WSIS.

Here is what we came up with:

As the team is working with linked data, it was clear that internally and for providing the data we would use the Resource Description Framework (RDF). To represent data based on this data model we could almost entirely rely on the schema.org vocabulary, which has the advantage that the OER data will also be indexed by search engines like Google and Bing. Only three RDF properties and one class had to be taken from other vocabularies. RDF is very flexible and can be extended very easily; extensions and other adjustments will probably turn out to be necessary within the next phase of the project.
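As an illustration, a record following this schema.org-based model could look like the following sketch; the property names and values are invented for illustration and are not the exact application profile:

```python
import json

# Hypothetical example record using the schema.org vocabulary;
# names and nesting are illustrative, not the actual AP.
record = {
    "@context": {"@vocab": "http://schema.org/"},
    "@type": "Organization",
    "name": "Example OER Initiative",
    "url": "http://example.org/",
    "location": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressCountry": "DE",
            "addressLocality": "Cologne",
        },
    },
}

# Serialise to JSON-LD, one of the RDF serialisations used by the prototype.
serialised = json.dumps(record, indent=2)
print(serialised)
```

Because JSON-LD is both valid JSON and RDF, such a record can be consumed by ordinary web clients and by linked-data tooling alike.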

At the end of the project, the data set the prototype is based on includes information on

  • x organisations, (what is the "x" here for?)
  • x persons,
  • x services,
  • x projects.

Application profile (minus)

We put some time into developing an application profile (AP) using RDF for the OER world map. This application profile expresses the application's data model and configures how the data will be viewed in the Drupal editing and presentation environment (see below). In the future, it should also be used as a basis for validating the data input.

The concept of an application profile comes from the Dublin Core Metadata Initiative (DCMI). In short, an application profile is a set of metadata elements, policies, and guidelines defined for a particular application. The elements may be from one or more vocabularies, thus allowing a given application to meet its functional requirements by using metadata from several vocabularies. An application profile is not complete without documentation that defines the policies and best practices appropriate to the application.

Footnote: Cf. the DCMI Glossary (http://dublincore.org/documents/2001/04/12/usageguide/glossary.shtml). Recently, an "RDF Application Profile" activity (http://wiki.dublincore.org/index.php/RDF-Application-Profiles) was started within DCMI. The experiences made in developing the OER world map prototype may serve as valuable input in developing a standard for representing an application profile in RDF.

The application profile allows us to keep the configuration of the Drupal editing and presentation environment and of future API validation in one central place. To change the API validation or the web site, only the application profile has to be edited; all connected forms and presentation pages will automatically change accordingly. The AP is also maintained on GitHub and enables relatively easy maintenance of the data presentation and validation without direct interaction with the front-end or API developer (the application profile, in other words, is the means of unambiguous communication between a metadata expert and the developers). This accelerates and lowers the cost of further development of the OER world map.
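The idea of driving the presentation from one central profile can be sketched as follows; the profile structure and field names here are simplified assumptions, not the actual AP maintained on GitHub:

```python
# Minimal sketch of an application profile driving form/view generation;
# structure and names are illustrative assumptions.
APPLICATION_PROFILE = {
    "Organization": [
        {"property": "name",  "label": "Name",    "required": True},
        {"property": "url",   "label": "Website", "required": False},
        {"property": "email", "label": "E-mail",  "required": False},
    ],
}

def form_fields(resource_type):
    """Derive the ordered list of form fields for a resource type
    from the application profile (as the Drupal parser does)."""
    return [(f["property"], f["label"])
            for f in APPLICATION_PROFILE[resource_type]]

print(form_fields("Organization"))
```

Changing a label or field order in the profile would then propagate to every form and view that is generated from it.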

API (minus)

A central element of the hbz prototype is an application programming interface (API), which supports the easy development of web applications on top of the data. The prototype's API can be found at http://lobid.org/oer. Currently, there exist two clients (both integrated under oerworldmap.org) that are based on the API: a viewing and editing environment built with Drupal and a JavaScript map to interact with the data in a map environment. Multiple other applications could also be built on top of the API.
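A client interacting with such an API might build query URLs along these lines; the query parameter names are illustrative assumptions, not the documented interface of http://lobid.org/oer:

```python
from urllib.parse import urlencode

API_BASE = "http://lobid.org/oer"  # the prototype's API endpoint

def query_url(**filters):
    """Build an API query URL from filter keyword arguments.
    The parameter names passed in are assumptions for illustration."""
    return API_BASE + "?" + urlencode(sorted(filters.items()))

print(query_url(q="open", country="Poland"))
```

Any application, not just the two existing clients, could construct such requests and consume the returned JSON-LD.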

Drupal view and editing environment (minus)

The content management system (CMS) Drupal is used to implement views and editing capabilities for the data provided by the API. However, we do not use the relational database that comes with Drupal. Instead, a so-called Entity Type was implemented to read from and write to the API. Additionally, a custom Entity Field Query was implemented to query the API. To demonstrate the use case of linking to external data, the GeoNames Search Webservice is also available via this component.

In order to load the RDF data provided by our API and GeoNames into Drupal, the built-in RDF capabilities of Drupal were extended to not only output, but also read data in RDF. The mappings of RDF-properties and classes to Drupal fields and bundles are parsed from the application profile.

As mentioned above, we link to GeoNames for countries and cities to demonstrate the approach of adding additional context to our data. These links contain data such as the population, which could be used for further visualizations. Furthermore, although we do not use Drupal's database for our actual data, all other capabilities of the CMS can be used, e.g. to define users and roles in an editing workflow.

JavaScript Map (minus)

The JavaScript map is a separate read-only front end based on Leaflet. Although it is currently embedded in the Drupal site, it communicates directly with the API to fetch its data. Most filtering (i.e. by type and country) is done within the map's JavaScript at this point of the prototype. As the dataset grows, this filtering can be moved to the API to increase performance. As a pragmatic approach, the map's popups reuse the views provided by Drupal; these views can be replaced by custom renderings in the future. To demonstrate the advantages of linking to external datasets, country labels from the GeoNames data are provided in the language corresponding to the browser's settings. These multilingual labels are not part of the OER world map dataset but retrieved from GeoNames using the Linked Open Data approach. Any updates to the GeoNames data will thus be reflected in the UI automatically. (It could be worked out more clearly here that this is an advantage of LOD.)
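The label selection described above can be sketched as follows; the multilingual label data mimics what GeoNames provides, and the function name is invented:

```python
# Sketch of picking a country label matching the browser's preferred
# languages; the label data mimics GeoNames' multilingual labels.
LABELS = {
    "DE": {"en": "Germany", "de": "Deutschland", "pl": "Niemcy"},
    "PL": {"en": "Poland",  "de": "Polen",       "pl": "Polska"},
}

def localised_label(country_code, accept_languages, default="en"):
    """Return the first label whose language the browser accepts,
    falling back to a default language."""
    labels = LABELS[country_code]
    for lang in accept_languages:
        if lang in labels:
            return labels[lang]
    return labels[default]

print(localised_label("PL", ["pl", "en"]))
```

A browser announcing Polish as its preferred language would thus see "Polska" on the map, without any Polish labels being stored in the OER world map dataset itself.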

Out of scope (tick)

Out of the scope of the project were:

  • aspects of the editorial workflow
  • organizational aspects
  • the business case
  • advanced features of the map design.

Course of the project (minus)

@Jan: maybe insert a Gantt chart here and add that we followed a rather iterative approach.

Week 01: 10.02.14 - 16.02.14 (workload: 23.0 h)

  • Start definition of application profile (#1)
  • Serendipy and OCWC data received (#2, #3)
  • Set up GitHub repository

Week 02: 17.02.14 - 23.02.14 (workload: 69.5 h)

  • Milestone: initial version of application profile created

Week 03: 24.02.14 - 02.03.14 (workload: 50.5 h)

  • Set up Play module for OER API (#9)
  • Access initial sample OER data set in Elasticsearch (#10)
  • Implement query API (#11)
  • Implement write API (#12)
  • Create formal application profile (#7)

Week 04: 03.03.14 - 09.03.14 (workload: 56.0 h)

  • Milestone: OCWC data has been indexed
  • Convert initial N-Triples (OCWC data) to JSON-LD and index in Elasticsearch (#10)
  • Implement parser to load application profile into Drupal (#4, #15, #18)

Week 05: 10.03.14 - 16.03.14 (workload: 64.5 h)

  • Milestone: initial version of web forms available
  • Improve project and test setup (quality assurance) (#14, #22)
  • Support any RDF serialization in write API (#23)
  • Support multiple type filters in query API (#28)
  • Implement Linked Data entity type for Drupal (#19, #20)
  • Implement queries for links in web form (#16)

Week 06: 17.03.14 - 23.03.14 (workload: 98.0 h)

  • Implement GeoNames query for locality/country field in web form (#26)
  • Add geospatial filter to API (#34)
  • Improve transformation of OCWC data (#10)
  • Support JSONP requests in API (#38)
  • Support any RDF serialization in read API (#48)
  • Use remote context in JSON-LD served by the API (#44)
  • Implement basic JavaScript-based map (#39)
  • Implement search form in Drupal

Week 07: 24.03.14 - 30.03.14 (workload: 81.0 h)

  • Milestone: initial version of map interface
  • Finish work on OCWC data transformation (#5)
  • Several adjustments to the application profile (#59, #64, #66)
  • Implement type and country filters in map (#40, #57)
  • Support pagination on API level
  • Support deletion via API and improve API response codes and content types (#51, #65)

Week 08: 31.03.14 - 06.04.14 (workload: 49.0 h)

  • Milestone: decision to include WSIS data
  • Add links between consortia and their members to the data
  • Manual editing/refinement of WSIS data

Week 09: 07.04.14 - 13.04.14 (workload: 90.0 h)

  • Transform WSIS data on OER initiatives and add it to the API (#95)
  • Index GeoNames data and allow OER API queries by country names in any language (#98)
  • Update Elasticsearch backend to 1.1.0

Week 10: 14.04.14 - 20.04.14 (workload: 38.0 h)

  • Milestone: WSIS data imported
  • Project completion report finished

Total workload: 619.5 h

Lessons Learned (tick)

The project differed in many aspects from usual hbz projects:

  • it was shorter,
  • there was no individual customer that could specify the requirements if needed,
  • the topic differed from the usual library projects hbz is typically involved in,
  • it was the first time we received funding from a non-German institution.

Considering these circumstances, we learned a lot during the project.

Project Management (tick)

  • Since there were few predetermined use cases and since we did not focus on developing use cases ourselves, we neglected to work strictly according to defined use cases. This led to some imprecisely formulated tickets and probably cost us some time. For future projects we would take care to define use cases carefully and orient ourselves more strictly towards them.
  • Communication with external stakeholders was very interesting, but also time-consuming. We did not plan much time for external communication, although this would have been helpful for the definition of use cases as well as for community building. For the creation of the production system we recommend making sure that enough time is spent on external project communication, since the overall success of the OER world map will depend heavily on community acceptance.
  • One consequence of our data-centric approach was that large parts of our development time were spent on the backend of the system. As a result, there was little time left for the refinement of the frontend. If we had to do it again, we would try to plan more time for additional development iterations of the front end.
  • Not totally unexpectedly, cooperativeness from the community was very high.
  • Although there is a high awareness of the need for content licenses within the OER community, awareness of the need for licenses for data is less developed. Neither the integrated OCWC data nor the WSIS data was licensed, and even we ourselves became aware of this question very late. We recommend using the CC0 Public Domain Dedication by Creative Commons for all data connected with OER projects.
  • In the beginning of the project, we followed our standard workflow for quality control and deploying changes on the production system, established before by the lobid team. Though this approach is appropriate and needed for a production environment, it soon became clear that it isn't the best approach for developing a prototype in ten weeks. Thus, we decided to switch to a simpler and quicker workflow. It took us until the end of the project to get accustomed to it. Lesson learned: next time we should agree on a clearly defined lightweight workflow for quality control and deploying code changes at the beginning of the project.
  • The due diligence process of the Hewlett Foundation took longer than we expected, which dampened the positive energy generated by the initial acceptance of our proposal. It also took one week of the planned project time, which gave us the feeling of being late from the beginning. Although we had no problem keeping up with the project plan, we would recommend that the Hewlett Foundation make the assignment process more transparent in order to avoid irritation on the side of the grantees.

Design (minus)

  • The project confirmed our initial assumption that an API-centric approach is preferable to other approaches. It even led us to the insight that it is advisable to conceptually distinguish between two central parts of the system: the database and the visualisation in form of a map. In the following we will use the term "OER data hub" for the backend and "OER World Map" for the frontend. Nevertheless, our API development was dominated by the needs of the world map, since there were no other applications placing demands on the design of the API. Therefore, our goal of building an API that provides a more general abstraction level on the data has not yet been fully achieved.
  • The project also affirmed our decision in favour of the use of LOD technology. The data included in the OER world map describes basic building blocks the OER community consists of, and there should be many useful ways of reusing them (for examples see XX). Thus, the data should be published in a way that assures maximum reusability. (@Adrian: Could we add a sentence here saying that the OER data hub provides a controlled vocabulary that other applications can refer to?) Especially with the goal of building an international multilingual service, linked data and the configuration of the data presentation via a central application profile prove to be an ideal approach. The prototype gives a first taste of the advantages: if you open the http://oerworldmap.org page, the country labels used as filters on the map will appear in the language you have selected in your browser settings as your preferred language. If I choose German, I will see German labels; if I choose English, I will see English ones. This is possible because GeoNames (insert link) - the service the information on countries and cities is linked to - provides the labels of cities and countries in multiple languages. This also made it very easy to enable searching for persons and organisations (Are persons and organisations also provided by GeoNames?) in one country using the language of your choice. E.g. if I am from Poland and want to find out what OER initiatives are happening in Poland, I might use my primary language - Polish - for the query. Based on the GeoNames data, the API will deliver the same results for 'Polska' as for 'Poland'.
  • One important function of the OER world map is to give a (hopefully) impressive image of the OER movement at a glance. Such a picture can be used to argue for OER in political and other contexts. For this, questions of aesthetics should not be underestimated. For the development within phase II we would consider cooperating with a specialized design partner to make sure a "high gloss finish" of the world map visualisation is achieved.
  • Since various relevant datasets already exist (see Creative Commons), in addition to entering individual records via a form there must also be the possibility to import whole databases.
  • Search results and the world map currently stand relatively unconnected side by side. They should be interlinked better, e.g. by making search possible within the map or by letting users jump from the search results into the map.
  • Very early on, contrary to what was reflected in the proposal, we decided to include information on services (e.g. OER repositories, search interfaces) in the data. This seemed to make sense as these play a vital role in the OER community and as the OER world map should be of help in discovering open educational resources.

Technical (minus)

@Pascal: please add to this section!

@Jan: smooth this over again after the additions.

  • What about the geo information? Didn't we learn quite a bit there?
  • UUIDs were also such a topic.
  • When integrating complete additional datasets, there needs to be some editorial quality assurance of the data, which can be quite time-intensive. Refinement and transformation of the WSIS data took us four days. Therefore we would recommend automating at least parts of these checks.
  • Transforming the WSIS data, which was obtained in an unfavourable CSV (comma-separated values) form, was a very time-consuming job. (Transforming the OCWC data also took some time but was a lot easier.) It confirmed to us once more that, if one wants to maximize reuse of a data set, it is necessary to increase the effort put into producing the data, e.g. by creating linked data, and to minimize the post-processing effort of third parties.
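The kind of CSV-to-JSON-LD transformation described above can be sketched as follows; the column names and target structure are invented for illustration, not the actual WSIS export:

```python
import csv
import io

# Toy CSV input standing in for a data export; columns are invented.
raw = io.StringIO(
    "name,homepage,country\n"
    "Example Initiative,http://example.org/,FR\n"
)

# Map each CSV row onto a schema.org-shaped record.
records = []
for row in csv.DictReader(raw):
    records.append({
        "@type": "Organization",
        "name": row["name"].strip(),
        "url": row["homepage"].strip(),
        "location": {
            "@type": "Place",
            "address": {"@type": "PostalAddress",
                        "addressCountry": row["country"].strip()},
        },
    })

print(records[0]["name"])
```

In practice most of the effort lies not in this mechanical mapping but in cleaning inconsistent values, which is why automated checks pay off.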

Recommendations for further development of an OER World Map (tick)

Refinement of the hbz prototype  (tick)

If the production OER world map service is developed based on the prototype created in this project, several refinements need to be made in order to achieve a reliable production environment.

Refine data model and application profile (minus)

One point is the data model and application profile. A wider discussion on what it should look like to enable enough people to add and maintain the OER world map data seems necessary. We have identified some points that should be discussed:

  • Types of resources described: Does it make sense to add more resource types than the existing four ("organisation", "person", "service", "project")? (I would not phrase this as a question but point out the consequences...) We could think of additional types like "publication" (for adding OER-related publications in general and specifically publications about a registered project or service), "software", "event", "OER policy" etc. As usual, one should keep in mind that adding new resource types results in a more complex data model and, thus, a more sophisticated editing interface.
  • Mark discontinued services: In order to be a useful resource, it should be possible to search only for services that are actually running. In another context, it might make sense to query for all services that were shut down in 2013. Thus, adding startDate/endDate fields or an operating/discontinued status field for services makes a lot of sense.
  • Add more options for classifying/subject indexing: Especially for services it makes sense to add information on the language of the provided material, target group, subject etc. Regarding persons, one could add information on languages spoken or type of expertise.
  • Multilingual data input: Enable storing descriptions etc. in different languages and indicating these languages. This would make it possible to provide the information in the language a web browser asks for and would allow linguistic communities to register their projects in their own languages in the OER world map.
  • Link projects to organisations they are run within: Currently, you can only link projects to persons working on the project but not directly to the organisations a project is run within. It might make sense to add this possibility, especially as projects themselves don't have any geo or address information and thus can't be located on a map.
  • Add field "P.O. Box": Currently, you can only add a street address but no P.O. box information (although some P.O. box numbers are stored in the streetAddress field).
  • What about the geo information? Does that belong here, too?

Data enrichment (minus)

Even for the existing, quite simple data model, some information is missing because it didn't exist in the source data and/or we didn't have the time to extract the information we need from the data. For example, though we spent some time on the transformation of the WSIS data on OER initiatives, we only pulled out geographic information for organizations but not for persons. In the future, it would make sense to add the city and country a person is based in, either semi-automatically or manually.

Automatic data validation based on application profile (minus)

In the prototype, data input is validated neither on the client nor on the API level. To ensure a consistent and thus maximally useful data set, it will be important to add validation of incoming data on the API level as well as when indexing transformed data from other data sources (like OCWC or WSIS). It is highly desirable to add an automatic method of validating data based on the application profile, so that the application profile serves as a central standard deciding whether data can be added to the data set or must be adapted before indexing. The transformation work would benefit greatly from such a process, because right now transformations are checked against test files which have to be adjusted separately whenever the application profile is changed.
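Such application-profile-driven validation could be sketched as follows; the rule structure and the e-mail pattern are assumptions for illustration, not part of the actual AP:

```python
import re

# Illustrative validation rules as they might be derived from the
# application profile; rule set and regex are assumptions.
RULES = {
    "Organization": {
        "required": ["name"],
        "patterns": {"email": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},
    },
}

def validate(record):
    """Return a list of validation errors; an empty list means the
    record may be indexed."""
    rules = RULES.get(record.get("@type"), {})
    errors = [f"missing required field: {p}"
              for p in rules.get("required", []) if p not in record]
    for prop, pattern in rules.get("patterns", {}).items():
        if prop in record and not re.match(pattern, record[prop]):
            errors.append(f"invalid value for {prop}")
    return errors

print(validate({"@type": "Organization", "email": "not-an-address"}))
```

The same rules could then be applied both to API writes and to batch imports from external sources, replacing the hand-maintained test files.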

Separate application profiles for validation and presentation (minus)

Currently, the information for validating the data input (what kind of strings are allowed in the "email" field etc.?) is stored alongside the information for presenting the data (in which order should fields be shown and with which labels?) in one application profile. It is highly desirable to have separate documents for these two use cases.

Add provenance, administrative metadata and versioning (minus)

Currently, the API only holds the actual data about the different resources. There is no information about where the data comes from, how it was transformed and who did this. Also, there is no information available about when a resource description was added or last modified. In other words, no provenance data or other administrative metadata is available. As this data is important to assess whether a data entry is valid and up to date, it should be added if a production service is developed based on the prototype.

Also, it would be really useful to be able to roll back changes made to resource descriptions. Thus, versioning of the records would be desirable.
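A minimal sketch of provenance metadata combined with simple record versioning, with invented field names, could look like this:

```python
import copy
from datetime import datetime, timezone

# Sketch of wrapping a resource description with provenance metadata
# and a version history; field names are illustrative assumptions.
def new_entry(data, source):
    now = datetime.now(timezone.utc).isoformat()
    return {"data": data,
            "provenance": {"source": source, "created": now},
            "versions": []}

def update_entry(entry, new_data, editor):
    """Store the previous state before applying a change, so edits
    can be rolled back later."""
    entry["versions"].append(copy.deepcopy(entry["data"]))
    entry["data"] = new_data
    entry["provenance"]["modified"] = datetime.now(timezone.utc).isoformat()
    entry["provenance"]["editor"] = editor
    return entry

e = new_entry({"name": "Example"}, source="OCWC")
e = update_entry(e, {"name": "Example (renamed)"}, editor="jan")
print(len(e["versions"]))
```

With every former state preserved, a roll-back is just restoring an element of the version list.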

Improvements in resource presentation & web form (minus)

This paragraph deals with the presentation of information on a resource as well as with the web form for editing this information.

Here is a screenshot of an example of how the information about a resource (here: the organisation "Universidad de Granada") is displayed in the prototype:

We are already quite content with the HTML representations of the information about the different resources (organisation, person, service, project). There are some things that should be adjusted, though, in a production service. And it makes sense in general to experiment with different approaches of presenting the data.

One action item would be to replace the extensive information about a linked resource that is shown when clicking on it (in the example, for instance, the information about "UNIVERSIA", of which the organisation is a member) with less information, plus the possibility to get more on the respective page of the linked resource. The current approach is especially problematic when the linked resource holds more information than the primary resource.

A desirable and easy-to-implement feature is to show an organization's location on a small map instead of indicating the geo coordinates (see current view below).

Currently, the web form quite directly reflects the data model as defined in the application profile, which results in nested boxes that might be difficult for editors to understand. A good example is the box to add or edit an address:

If the production system is developed based on this prototype, one will have to experiment with other ways of presenting the web forms. For a small group of committed editors, the current web form might be sufficient, though.

Improve world map presentation and interaction possibilities (minus)

This paragraph deals with the presentation of the actual map and the possibilities for interaction it provides.

  • Experiment with different pin shapes and colours
  • Fix the map view so that one cannot get lost in areas of the world map without entries
  • Use drop-down lists for filter options instead of checkboxes
  • Order filter options alphabetically
  • Integrate search directly into the world map
  • Connect search results with the world map ("click on a search result to display it on the world map")

General recommendations for collaborative advancement of the project (minus)

Discuss editorial process (minus)

One important question which was not dealt with during our project is the design of the editorial processes for the OER world map. Generally, we would argue that as much effort as possible should be carried by the community in order to save costs. Nevertheless, it has to be understood that more detailed and sophisticated data models (or APs?) inevitably require more time and understanding on the side of the responsible editor. For example, it can be difficult to distinguish between an organisation, its services and its projects: we found that WSIS classified some entries as "communities", which we transformed into services run by an organisation. Such differentiations can be difficult to make.

Counteractions to keep community participation high could be:
1. the development of intuitive smart templates, including intelligent autocompletion and validation mechanisms,
2. a short and entertaining explanation of the structure of the used data model, preferably in the form of a short video,
3. generation of high additional value of the map for the user. For example, we assume that an OER provider will be willing to spend more time on data submission if this leads to a proven increase in the usage of their materials.

But even if the motivation of the community to participate is high, we would recommend integrating some kind of editorial quality control in order to avoid data inconsistencies and to make sure that data is updated regularly. Since we consider it rather unlikely that one organization will commit itself to editing the complete world map data (unless it is paid to do so), a decentralized solution will probably be preferable. It would be very helpful to convince organizations which use parts of the data for their own purposes (like the UNESCO/WSIS community or the OCWC) to use the OER world map platform to collect and edit their data. Additionally, it might be suitable to appoint national responsibilities, in the sense that a group of volunteers takes over the responsibility for editing the data of a country.

In any case, the platform should be developed to support these processes by defining prioritised editorial rights. It could also be helpful to include an alert service which makes it possible for users to report outdated data to the editorial team with one click.

Maximize usage and ensure sustainability (minus)

How to attain sustainability seems to be one of the most important questions of the upcoming phase II of the World Map project. Since it will be difficult to define a business model, especially in the short run, it would be very helpful if the funding of phase II were extended, so that there would be the possibility to run and refine the system for 2-3 years after its initial development.

Avoiding redundant collection and editing of the data in different projects would also be very helpful in order to reduce overall costs for the OER community. Therefore, potential possibilities for cooperation should be investigated carefully. Ideally, initiatives like the OCWC, the UNESCO/WSIS community or the OER Research Hub would bundle their resources and use the OER World Map and the OER data hub as a common platform. However, our experience shows that such cooperations are difficult to achieve, since there are often slight differences in the required data models and little willingness to compromise. Since the OER community emphasises cooperation, it might nevertheless be possible in this case.

(@Adrian, Felix: The Brazilians proposed a rather decentralized concept. If we want to address that, this would be the place for it:) The hbz OER data hub could also support a decentralized collection of the data, as has been suggested in the Brazilian proposal. After an initial data mapping, data synchronization could be handled automatically, but every change to the data model would trigger adjustments in the OER data hub.

Apart from these provisions, we would recommend maximising the additional value of the data to the OER community. Once there is value, it is easier to generate revenue. One simple way to do so would be to link the data to other relevant data sets, as advocated by Tim Berners-Lee.

Footnote: Tim Berners-Lee: Linked Data Design Issues, URL: http://www.w3.org/DesignIssues/LinkedData.html and the very nice overview of the 5 star linked open data scheme at http://5stardata.info/.

Opportunities to do so should be manifold:

  • The OER data hub could be used to collect links to repositories which contain OERs. These links could be used to harvest OERs and feed them into an OER search engine.
  • The OER World Map and data hub could also be used to monitor the development of the OER movement by serving as a starting point for future reporting mechanisms about the OER movement. Right now there seems to be a painful gap regarding statistical data about the OER movement. For instance, UNESCO has not yet included much information about the OER movement in its Education for All Global Monitoring Report, partially because the needed information is hardly available. Such an OER development monitor could provide answers to questions like the following:
    • How many institutions are engaged in OER?
    • How many resources are being produced by a specific institution or country, or within a specific field of interest?
    • How many OER projects are actually running? What are the goals of these projects?
    • Which OER policies are used by which institution? Which policies appear to be more effective with respect to the number of resources produced?
  • Quality seems to be a critical aspect within the OER policy debate right now. Many approaches to quality control, like rating mechanisms, focus on the individual resources. The OER World Map could support a more institution-focused approach to quality control. For instance, institutions could be given the opportunity to qualify as a "Certified Quality OER Producer" by proving that their production processes follow a defined set of best practices.

Conclusion (minus)

  • The project successfully demonstrates that hbz can offer a platform which could be used for a scalable production system of the OER World Map using cutting-edge linked open data technology.
  • The project combines Open Source, Open Data and Open Educational Resources.
  • It is necessary to distinguish between the OER World Map and the OER data hub.
  • The OER World Map data, especially the institutions, models an important backbone of the OER ecosystem; extending this model by linking it to other resources will maximise the use of the data, which increases the chance that an OER World Map production system will be sustainable.
