Author: carlystrasser

UC Faculty Senate Passes #OA Policy

Big news! I just got this email regarding the new Open Access Policy for the University of California System. I’ll write a full blog post next week but wanted to share this as soon as possible. (emphasis is mine) The Academic Senate of the University of California has passed an Open Access Policy, ensuring that […]

The Data Lineup for #ESA2013

In less than a week, the Ecological Society of America’s 2013 Meeting will commence in Minneapolis, MN. There will be zillions of talks and posters on topics ranging from microbes to biomes, along with special sessions on education, outreach, and citizen science. So why am I going? For starters, I’m a marine ecologist by training, and this […]

It’s Time for Better Project Metrics

I’m involved in lots of projects, based at many institutions, with multiple funders and oodles of people involved. Each of these projects has requirements for reporting metrics that are used to prove the project is successful. Here, I want to argue that many of these metrics are arbitrary, and in some cases misleading. I’m not […]

Software Carpentry and Data Management

About a year ago, I started hearing about Software Carpentry. I wasn’t sure exactly what it was, but I envisioned tech-types showing up at your house with routers, hard drives, and wireless mice to repair whatever software was damaged by careless fumblings. Of course, this is completely wrong. I now know that it is actually […]

Software for Reproducibility Part 2: The Tools

Last week I wrote about the workshop I attended (Workshop on Software Infrastructure for Reproducibility in Science), held in Brooklyn at the new Center for Urban Science and Progress, NYU. This workshop was made possible by the Alfred P. Sloan Foundation and brought together heavy-hitters from the reproducibility world who work on software for workflows. I provided some broad-strokes overviews last […]

Software for Reproducibility

Last week I thought a lot about one of the foundational tenets of science: reproducibility. I attended the Workshop on Software Infrastructure for Reproducibility in Science, held in Brooklyn at the new Center for Urban Science and Progress, NYU. This workshop was made possible by the Alfred P. Sloan Foundation and brought together heavy-hitters from the reproducibility world who […]

Impact Factors: A Broken System

If you are a researcher, you are very familiar with the concept of a journal’s Impact Factor (IF). Basically, it’s a way to grade journal quality. From Wikipedia: The impact factor (IF) of an academic journal is a measure reflecting the average number of citations to recent articles published in the journal. It is frequently used as a proxy for the relative importance […]

Webinar Series on Data Management & DMPTool

One of the services we run at the California Digital Library is the DMPTool, an online tool that helps researchers create data management plans by guiding them through a series of prompts based on funder requirements. The tool provides resources and help in the form of links, help text, and suggested answers. […]

Large Facilities & the Data they Produce

Last week I spent three days in the desert, south of Albuquerque, at the NSF Large Facilities Workshop. What are these “large facilities”, you ask? I did too… this was a new world for me, but the workshop ended up being a great learning experience. The NSF has a Large Facilities Office within the Office of […]

Closed Data… Excuses, Excuses

If you are a fan of data sharing, open data, open science, and generally openness in research, you’ve heard them all: excuses for keeping data out of the public domain. If you are NOT a fan of openness, you should be. For both groups (the fans and the haters), I’ve decided to construct a “Frankenstein monster” […]