

UITS Research Technologies

Cyberinfrastructure enabling research and creative activities

Ways we can help

We can provide comprehensive support for your advanced IT needs, whether they involve data analysis, application development, visualization, data storage, or a novel way you would like to handle data or simulation. This is best illustrated by the typical research workflow in health care and related sciences. Your own research likely follows a similar workflow, although the order of the steps may differ.

[Figure: ABITC workflow diagram]

Data in a typical research workflow can be described in terms of a data lifecycle as follows:

Data creation → Data storage → Data analysis → Data publishing → Data archival/disposal

How we can help, step-by-step

[Note: We can do all of the following for research data, including clinical research data, in a way that conforms to HIPAA.] 
  • Preliminary investigation: We can support your advanced IT needs during your preliminary investigation for little or no cost.
  • Grant proposals: We can help you with grant proposals
    • through active partnering as co-investigators
    • by providing support letters
    • by providing customized descriptions of how IU's massive research IT resources will help your research.
  • Develop applications:
    • We can help you design and develop applications.
    • We can help you develop and optimize algorithms.
  • Manage data:
    • We can help you plan the best way to manage your data using approaches such as:
      • metadata and tags
      • data provenance
      • ontologies
  • Analyze data:
    • We can store working data and make it visible from computers across the lab, campus, the continent, or the world.
    • We can help you migrate your analysis or other compute-intensive applications to our supercomputers, often with significant (e.g., 10x) speedups.
    • We can accelerate your application further by parallelizing it.
    • We can supply optimized, parallel versions (where available) of commonly used life sciences computing tools such as BLAST.
    • We can help you optimize your workflows.
    • We can mirror publicly available, remote data locally.
  • Visualize data: We can help you use visualization to assist your analysis and/or decision making.
  • Publish data: We can help you present attractive visuals in local and/or national presentations or outreach activities.
  • Databases:
    • We can store your data in our Oracle and MySQL databases.
    • We can help you publish the data in our databases to the world via the Web.
  • Archive data:
    • We can provide you with a robust, 24x7 application hosting environment on our servers.
    • We can help you archive data on our massive data storage system for posterity (~decades).
    • We can protect your data against disasters via mirroring between Indianapolis and Bloomington.
    • We can archive ALL the data in your lab in a SINGLE location.
  • Share data:
    • We can help you get connected to national biomedical grids such as caBIG.
    • We can help design data dissemination environments.
  • Manage growth:
    • We can help you plan for future growth of your data storage, analysis, and dissemination needs.
    • We can handle project needs ranging from modest to massive scales.

Example research workflows we can augment

  • Let us assume that you have a research project that deals with clinical radiology data. The steps you take today might be described as follows:
    • Create many large images (megabytes each) using a medical sensor (say, PET). (You can substitute almost any kind of sensor or other data for images.)
    • Store and analyze them in house, using your own storage, computing systems, and applications that you have written yourself or have purchased.
    • Execute multiple, time-consuming analysis steps that use various internal and external inputs, iterating to an acceptable outcome.
    • Compile the results and archive the raw and processed data created in intermediate analysis steps. You also want your data to remain available to you (or others) for the next 10 years (for regulatory reasons, say).
    • Publish the results in a paper and make presentations at conferences.
    • Make the data or results available via the Web to the community at large because NIH requires/recommends it.
    You would
    • like to minimize the time between data acquisition and results, but do not currently have the resources in house to make this happen.
    • be willing to consider using external resources to speed up the process *IF* you have assurance of reliability, data security, and help with analysis and data management.
  • Let us assume that you have a set of simulations you run as follows:
    • Molecular dynamics to identify likely candidates that have properties that make them useful in treating some disease.
    • Molecular docking calculations with the disease-causing agent to determine whether the candidate molecule will bind effectively.
    • A massive search through national and local databases to identify the right compound.
    This is how you might execute them today.
    • Run these simulations on in-house computers.
    • Obtain results in hours to days.
    You realize that a much faster turnaround would allow you to explore areas you cannot today.
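The simulation workflow above is a staged pipeline: candidate generation, a binding filter, then a database search. The sketch below shows only the shape of that pipeline; every function is a hypothetical stand-in for the real molecular dynamics, docking, and database steps, each of which is where parallel hardware buys back turnaround time.

```python
# Hedged sketch of a staged screening pipeline. Each function is a toy
# placeholder for the corresponding real simulation or search step.

def dynamics_candidates():
    """Stand-in for molecular dynamics: yield candidate molecule IDs."""
    return ["mol-a", "mol-b", "mol-c"]

def docking_score(candidate):
    """Stand-in for a docking calculation: lower score = tighter binding."""
    return {"mol-a": 0.9, "mol-b": 0.2, "mol-c": 0.5}[candidate]

def lookup_compound(candidate):
    """Stand-in for the national/local database search for the compound."""
    return f"compound-for-{candidate}"

# Stage 1 feeds stage 2, which feeds stage 3; each stage can run in
# parallel across candidates on larger systems.
binders = [c for c in dynamics_candidates() if docking_score(c) < 0.5]
hits = [lookup_compound(c) for c in binders]
print(hits)
```

Shortening any single stage (most often the docking calculations) shortens the whole hours-to-days turnaround, which is what enables exploring a larger candidate space.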