Coming Soon – Center for Open Science Workshop – Oct 25th


Members of the CCDC – Ruoxia Zhao, Emily Westbrook, ReJeana Cary, DeVonna Gatlin, Priti Thakur (kneeling), Zhao Yu, Becca Haley, Niranga Wijesiri and Megan Schmale – showing off their COS T-shirts

UC Libraries and The Graduate School are pleased to host the Center for Open Science for a workshop on Increasing Openness and Reproducibility in Quantitative Research on October 25, 2017. The workshop will cover project documentation, version control, pre-analysis plans and the Open Science Framework. There will be two sessions of the workshop, one on the East Campus and one on the West Campus. The event is free and open to all. To register, visit https://goo.gl/Hf5neh. Participants are asked to bring their own devices for the best workshop experience.

Questions? Please email Amy Koshoffer at ASKDATA@UC.EDU for more information.

Workshop Information:

Date: October 25, 2017

Session 1

Time: 9am – 12pm

Location: East Campus – Troup Learning Space – MSB G005G

Session 2

Time: 1:30pm – 4:30pm

Location: West Campus – 480 Langsam Library


Ohio Supercomputer Center Workshop – Oct 10th

Posted on behalf of Jane Combs – combsje@uc.edu

The Ohio Supercomputer Center will offer two workshops on its resources and how to use them on Tuesday, October 10, on both the East and West campuses.

IT@UC Research & Development will be hosting the Ohio Supercomputer Center for two workshops on Tuesday, October 10. The morning workshop will provide an introduction to the Ohio Supercomputer Center resources and how to use them. In the afternoon, the workshop will cover Big Data Analytics and Spark.

Register for the workshops HERE

The Ohio Supercomputer Center, headquartered in Columbus, is the state’s leading strategic research computing group and partners with Ohio researchers to develop proposals to funding organizations.

Continue reading Ohio Supercomputer Center Workshop – Oct 10th

UC Libraries and the Graduate School Host Workshop about the Open Science Framework

The University of Cincinnati Libraries and the Graduate School are pleased to host the Center for Open Science for a workshop on “Increasing Openness and Reproducibility in Quantitative Research” on Wednesday, Oct. 25.  The workshop will cover project documentation, version control, pre-analysis plans and the Open Science Framework.

There will be two identical sessions of the workshop, one on the Medical Campus from 9 a.m. to noon and one on the West Campus from 1:30 to 4:30 p.m. The event is free and open to all. To register, visit https://goo.gl/Hf5neh. Participants should bring their own devices for the best workshop experience.

The Open Science Framework (OSF) is an open-source workflow management tool developed by the Center for Open Science. Appropriate for any discipline, OSF enables researchers to manage workflows, share files, view project analytics, and more. Available at osf.uc.edu, OSF for UC is the portal for students, faculty, staff and others to manage project files and documents. There is no cost to use OSF, and sign-in is easy: go to osf.uc.edu, click the sign-in button, choose University of Cincinnati, then enter your UC 6+2 Central Login.

Workshop Information:

Date: Oct. 25, 2017

Session 1
Time: 9 a.m.-noon
Location: Medical Campus – Troup Learning Space, Donald C. Harrison Health Sciences Library – MSB G005G

Session 2
Time: 1:30-4:30 p.m.
Location: West Campus – 480 Walter C. Langsam Library

Questions? E-mail Amy Koshoffer, science informationist, at ASKDATA@UC.EDU for more information.

OSF FOR UC is here

The Researcher Services group, an initiative of UC Libraries with the IT@UC R&D Team, is pleased to announce a new tool for research projects: OSF for UC.

There is no cost to use OSF for UC. OSF, or the Open Science Framework, is an open-source workflow tool appropriate for any discipline, developed by the Center for Open Science.

OSF for UC – osf.uc.edu – is UC’s portal for students, faculty, staff and others who need to manage project files and documents. Sign-in is easy: go to osf.uc.edu, sign in, choose University of Cincinnati, then enter your UC 6+2 Central Login.

Through OSF, project teams can assign collaborators (internal and external to UC) and share project documents at a granular level (only share what you want, with whom you want).  Projects managed through the OSF are private by default.  Any or all parts of a project can be made public as desired or required by grant funders or others. 

Continue reading OSF FOR UC is here

Register Now for the Second Annual UC DATA Day

The University of Cincinnati Libraries and IT@UC announce the 2nd annual UC DATA Day. Scheduled for Thursday, March 23 from 8:30 a.m. – 3:30 p.m. in TUC 400 ABC (see directions), UC DATA Day 2017 will feature a full schedule of engaging events. All events are free and include lunch and an afternoon reception. The public is welcome.

Registration is now open at bit.ly/UCDataDay. Seats are limited, so register early.  Continue reading Register Now for the Second Annual UC DATA Day

Love Your Data Week Day 5 – Rescuing Unloved Data

Today’s LYD post is by Amy Koshoffer, Science Informationist based at the Geology-Math-Physics Library, with editorial support from Dr. Eric J. Tepe, Assistant Professor of Biology and Curator of the Margaret H. Fulford Herbarium.

It has been some time since I stepped over the threshold of my old lab in the Care/Crawley Building. Many changes occurred in the interim, including a move to another floor of the building. There are times I miss the bench research and the data I created in my time as a senior research assistant. One of my favorite techniques was microscopy, particularly electron microscopy (EM). I remember the multitude of samples processed, the long wait for samples to be ready to image and then, finally, all the amazing images we captured. Processing samples for EM imaging is a long and sometimes challenging process. The samples need to be dehydrated and then infiltrated with a resin to stabilize their structures and prevent destruction by the electron beam during viewing. You might not know whether a sample has been ideally preserved until you get to the imaging lab and begin to examine it. But what joy when the images look amazing, with crisp detail and no water holes. So much work and so many resources went into preserving the samples and acquiring the images.

I wonder what will happen to that effort in the years and decades to come. Are there others who might want to use the physical samples and digital images in their own work? Did I do what was needed to make sure that someone could reuse all the data created? Continue reading Love Your Data Week Day 5 – Rescuing Unloved Data

Love Your Data Week Day 4 – Finding the Right Data

Today’s LYD post is by Don P. Jason III, MLIS, MS, Clinical Informationist based at the Donald C. Harrison Health Sciences Library.

Welcome to Day 4 of “Love Your Data Week!” Whether you’re a student analyzing a data set for a school project or a researcher combining data sets to create new insights, finding the right data is essential! This blog post will list a few places you can look to find free, authoritative and unique data sets. The data sets have been broken down into three categories: US Government Data Sets, International Data Sets and Google Data Sets.

US Government Data Sets

Data.gov http://data.gov – This web site has an eclectic mix of data sets, from criminal justice to climate data. This government site encourages people to use the data to create web and mobile applications and to design data visualizations.

US Census Bureau http://www.census.gov/data.html – This web site provides data on the US population and economy. Utilizing this site’s data has never been easier thanks to new APIs, data visualizations, mobile apps and interactive web apps.

Healthdata.gov https://www.healthdata.gov/ – This web site includes US healthcare data.  The site is dedicated to making high value health data more accessible to entrepreneurs, researchers and policy makers.

National Climatic Data Center http://www.ncdc.noaa.gov/data-access/quick-links#loc-clim – The US National Climatic Data Center maintains the world’s largest archive of weather data, a robust collection of environmental, meteorological and climate data sets.

Continue reading Love Your Data Week Day 4 – Finding the Right Data

Love Your Data Week Day 3 – Good data examples

Today’s Love Your Data Week post is by Tiffany Grant, PhD, Interim Assistant Director for Research and Informatics at the Health Sciences Library (HSL) and Research Informationist.

Data, FAIR Data

If asked to define good data, the definitions would run the gamut, as interpretation of the term is specific to the types and formats of data an individual typically collects. Simply put, however, good data meets a standard of quality, and data quality generally refers to the ability of data to serve its intended use. In short, data quality hinges on the reliability and usability of data. The combination of good data quality and good data documentation ensures accurate interpretation and reproducibility. Beyond documentation, a number of federal mandates dictate that data be shared beyond one’s own lab notebook, and to ensure proper interpretation and reproducibility, your data must be FAIR: Findable, Accessible, Interoperable and Reusable.
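The FAIR principles are easiest to see in a concrete record. Below is a minimal, hypothetical sketch in Python of a machine-readable metadata record for a shared data set; every field name and value here is illustrative only and does not follow any one specific metadata standard:

```python
import json

# A hypothetical metadata record illustrating the FAIR principles:
# Findable (persistent identifier, rich description), Accessible (a
# stable retrieval URL), Interoperable (an open, standard file format),
# Reusable (an explicit license and creator/provenance information).
record = {
    "identifier": "doi:10.xxxx/example",     # Findable: persistent ID (placeholder)
    "title": "Example EM image data set",
    "description": "Electron microscopy images of fixed tissue samples.",
    "access_url": "https://osf.io/example",  # Accessible: where to retrieve it
    "format": "image/tiff",                  # Interoperable: open format
    "license": "CC-BY-4.0",                  # Reusable: clear terms of reuse
    "creator": "Example Lab, University of Cincinnati",
}

# Saving the record alongside the data keeps it machine-readable.
with open("dataset_metadata.json", "w") as f:
    json.dump(record, f, indent=2)
```

A record like this can be harvested by repositories and search tools, which is what makes the data findable by someone other than its creator.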


Continue reading Love Your Data Week Day 3 – Good data examples

Love Your Data Week Day 2 – Documenting, Describing and Defining

Today’s Love Your Data Week post is by Tiffany Grant, PhD, Interim Assistant Director for Research and Informatics at the Health Sciences Library (HSL) and Research Informationist.

The Big 3 of Data

Documenting, describing and defining your data are the 3 most critical components of good data management and your data legacy. If done properly, documentation ensures accurate interpretation and reproducibility of your data. Additionally, it improves the integrity of the scholarly record by providing a more complete picture of how your research was conducted.

Data Things to Do

  1. Document all file names and formats associated with your project
  2. Describe how your data was derived, including a description of any equipment and/or software used in the process
    1. Describe your file naming conventions and folder structures
  3. Define any abbreviations, variables or codes used in your data or your file names/folders
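As a sketch of steps 1 and 3 above, the short Python script below inventories a project folder’s file names and formats and records the definitions of any codes used in the file names in a machine-readable data dictionary. The folder name, file names and abbreviations are hypothetical stand-ins for your own project:

```python
import json
from pathlib import Path

# Hypothetical project folder and example files -- substitute your own.
project = Path("mouse_em_study")
project.mkdir(exist_ok=True)
(project / "2017-02-17_liver_x5000.tif").touch()
(project / "sample_log.csv").touch()

# Step 1: document all file names and formats associated with the project.
inventory = [
    {"file": p.name, "format": p.suffix.lstrip(".")}
    for p in sorted(project.iterdir())
]

# Step 3: define the abbreviations and codes used in data and file names.
definitions = {
    "EM": "electron microscopy",
    "x5000": "magnification of 5000x",
    "liver": "tissue type of the imaged sample",
}

# Write both into a single data dictionary kept with the data itself.
with open(project / "DATA_DICTIONARY.json", "w") as f:
    json.dump({"files": inventory, "definitions": definitions}, f, indent=2)
```

Keeping the dictionary in the project folder means the documentation travels with the data when it is shared or archived.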

Big 3 Data Basics

Who: Who are the contributors?

What: What kind of data was collected and what analyses were done to generate the data?

Why: Why was the project started, i.e. what questions did you hope to answer?

Where: Where did you get your data (if you aren’t the creator)? What is the physical location of the data?

How: How was your data generated?  

Message of the day

Good documentation tells people they can trust your data by enabling validation, replication, and reuse.