
Posts

Showing posts from June, 2018

2D Image Data Correlation Analysis

This post describes a few different methods to quantify how "close" two 2D images/datasets are to each other. We define the two 2D datasets in this case to be z1(x,y) and z2(x,y); each could be an image in a 2D plane or some other 2D dataset. Obviously, first just look at the images side by side; then one would want to quantify their degree of similarity.

1. Slope of M1 vs M2. Unzip (flatten) the 2D array of each image and plot one against the other (make sure to unzip both the same way), then fit a line. Check that: (a) R² is close to 1; (b) the slope is close to 1; (c) the offset is close to 0. Make sure the errors are correctly defined for each of the three fit parameters, as they can be fairly independent of each other.

2. Difference plot. Plot the difference of the two images as an image to see if there are any blind spots or anything odd. If not, also unzip the difference array and plot it as a series to look for outliers, checking whether each point is within the spread of the series. If yes, look ...
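The first method above (flatten both images identically, fit a line, inspect slope/offset/R²) can be sketched as follows. This is a minimal illustration with synthetic data, since the post's actual images aren't shown; the array names and noise level are assumptions.

```python
import numpy as np

# Synthetic example: z2 is z1 plus small noise (stand-ins for the real images).
rng = np.random.default_rng(0)
z1 = rng.random((64, 64))
z2 = z1 + 0.01 * rng.standard_normal((64, 64))

# "Unzip" both 2D arrays the same way (row-major) so points correspond.
m1 = z1.ravel()
m2 = z2.ravel()

# Linear fit: m2 = slope * m1 + offset
slope, offset = np.polyfit(m1, m2, 1)
r2 = np.corrcoef(m1, m2)[0, 1] ** 2

# For near-identical images: slope ~ 1, offset ~ 0, R² ~ 1
print(f"slope={slope:.4f}  offset={offset:.4f}  R2={r2:.4f}")

# Method 2: the difference array, flattened into a series for outlier checks.
diff_series = (z1 - z2).ravel()
print(f"diff mean={diff_series.mean():.4f}  std={diff_series.std():.4f}")
```

The same flattened arrays serve both checks: the scatter fit for method 1 and the difference series for method 2.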

CSV vs HDF5 Time, Size and Shaping

I needed to optimize how we store and manage our data. The data was plain float numbers, so I decided to first check how Python handles it with CSV vs saving in the HDF5 format. For a quick check, I generated random numbers and measured the file size of the stored data as well as the time it took to save. Results:

1. The type of ordering (row, column, square) didn't matter for either CSV or HDF5, in save time or file size.

2. HDF5 performed significantly better in time and consistently better than CSV in size. The save-time advantage shifted to HDF5 somewhere around 10,000 floating point numbers; for fewer than that, CSV appears to do better.

3. To read the data back, you can use:

h5f = h5py.File('ColH.h5','r')
bb = h5f['ColData'][:]
h5f.close()

Note, there is NO loss of information in HDF5 compression. Plots and code below.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Jun  6...
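A minimal sketch of the comparison described above, timing the save and checking file size for both formats. The file names and dataset name ('ColH.h5', 'ColData') follow the read-back snippet in the post; the array size and CSV writer (numpy's savetxt) are assumptions, since the full benchmark code is truncated here.

```python
import os
import time

import h5py
import numpy as np

data = np.random.random(100_000)  # plain floats, as in the post

# CSV save (one float per line)
t0 = time.perf_counter()
np.savetxt('col.csv', data)
t_csv = time.perf_counter() - t0

# HDF5 save; dataset name 'ColData' matches the post's read-back snippet
t0 = time.perf_counter()
with h5py.File('ColH.h5', 'w') as h5f:
    h5f.create_dataset('ColData', data=data)
t_h5 = time.perf_counter() - t0

print(f"CSV:  {os.path.getsize('col.csv')} bytes, {t_csv:.4f} s")
print(f"HDF5: {os.path.getsize('ColH.h5')} bytes, {t_h5:.4f} s")

# Read back and confirm the lossless round trip the post mentions
with h5py.File('ColH.h5', 'r') as h5f:
    bb = h5f['ColData'][:]
assert np.array_equal(bb, data)
```

The round-trip assertion at the end demonstrates the "NO loss of information" point: HDF5 stores the float64 values bit-for-bit, while CSV's text representation depends on the chosen formatting precision.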