
Comments (11)

cnzqy1 commented on July 16, 2024

I tried Ext4 and the same issue happened. I didn't check using it in single-process mode. Contents are being written to disk as expected. For now I'm just going to run igneous locally and upload the results to the server.


william-silversmith commented on July 16, 2024


bluehorseshoe1 commented on July 16, 2024
import sys
import os
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from PIL import Image
from cloudvolume import CloudVolume
from cloudvolume.lib import mkdir, touch

Image.MAX_IMAGE_PIXELS = None

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type='image',          # 'image' or 'segmentation'
    data_type='uint8',           # can pick any popular uint
    encoding='raw',              # other options: 'jpeg', 'compressed_segmentation' (req. uint32 or uint64)
    resolution=[6, 6, 6],        # X,Y,Z values in nanometers
    voxel_offset=[0, 0, 0],      # X,Y,Z values in voxels
    chunk_size=[2048, 2048, 1],  # rechunk of image X,Y,Z in voxels
    volume_size=[15068, 17500, 13892],  # X,Y,Z size in voxels
)


try:
  vol = CloudVolume('file:///data01/output', info=info, compress='gzip')

  vol.commit_info()  # writes the info json file to file:///data01/output/info

  direct = '/data01/200121_B2_final'

  progress_dir = mkdir('progress')  # unlike os.mkdir, doesn't crash on preexisting dirs
  done_files = set([int(z) for z in os.listdir(progress_dir)])
  all_files = set(range(vol.bounds.minpt.z, vol.bounds.maxpt.z))

  to_upload = [int(z) for z in list(all_files.difference(done_files))]
  to_upload.sort()
except IOError as err:
  errno, strerror = err.args
  print ('I/O error({0}): {1}'.format(errno, strerror))
  print (err)
except ValueError as ve:
  print ('Could not convert data to an integer.')
  print (ve)
except:
  print ('Unexpected error:', sys.exc_info()[0])
  raise

def process(z):
    try:
      img_name = 'left_resliced%05d.tif' % z
      print('Processing ', img_name)
      image = Image.open(os.path.join(direct, img_name))
      (width, height) = image.size
      array = np.array(list(image.getdata()), dtype=np.uint8, order='F')
      array = array.reshape((1, height, width)).T
      vol[:, :, z] = array
      image.close()
      touch(os.path.join(progress_dir, str(z)))
    except IOError as err:
      errno, strerror = err.args
      print ('I/O error({0}): {1}'.format(errno, strerror))
      print (err)
    except ValueError as ve:
      print ('Could not convert data to an integer.')
      print (ve)
    except:
      print ('Unexpected error:', sys.exc_info()[0])
      raise


with ProcessPoolExecutor(max_workers=4) as executor:
    executor.map(process, to_upload)


bluehorseshoe1 commented on July 16, 2024

Just for some more info: we are coming at this from an IT infrastructure perspective, assisting a lab with getting this set up on a host, so we are learning as we go along. We are running the precomputed_image.py script above on a very large EC2 instance in AWS (r5dn.8xlarge, 32 vCPUs, 256 GB RAM) against 13,000 .tif files, with 4 max workers.

Our last test ran for about 3 hours before consuming all of the memory and crashing.


bluehorseshoe1 commented on July 16, 2024

[Screenshot attached: Screen Shot 2020-11-02 at 9 01 32 PM]


william-silversmith commented on July 16, 2024

This is pretty weird. It should be using a bit more than 10GB with 4 processes. Does this kind of memory growth happen if you run it like a regular script without the multiprocessing? How far did the process get in terms of slices before crashing? Part of me wonders if there's a dangling reference to the image somewhere.

Another thing you can try is putting the CV initialization inside of process. That should prevent any weird references from persisting. Since you're writing to disk, fetching the info file will be fast. Let me know what happens, if there's a memory leak in CV I'd want to fix it.
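A minimal sketch of that change, reusing the imports and the direct / progress_dir globals from the script above (it assumes the parent process has already run vol.commit_info(), so each worker just re-reads the info file from disk):

def process(z):
    # Each worker builds its own CloudVolume handle instead of sharing the
    # module-level one, so no handles or caches persist across processes.
    vol = CloudVolume('file:///data01/output', compress='gzip')
    img_name = 'left_resliced%05d.tif' % z
    image = Image.open(os.path.join(direct, img_name))
    (width, height) = image.size
    array = np.array(list(image.getdata()), dtype=np.uint8, order='F')
    array = array.reshape((1, height, width)).T
    vol[:, :, z] = array
    image.close()
    touch(os.path.join(progress_dir, str(z)))

Everything after the CloudVolume line is unchanged from the original process().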


cnzqy1 commented on July 16, 2024

Just to follow up with this issue, running this script locally works fine and it never used more than 24 GB of RAM with 8 processes, as you pointed out.

To get around the issue with the first script, I ran it locally to create [8960, 8960, 1] chunks, uploaded them to the server, and then ran the following script to rechunk. I was expecting this to use less than 64 GB of memory in total, but it completely filled the 128 GB of memory, the process became extremely slow, and it eventually crashed the host server. Similarly, this script works fine locally but doesn't run properly on EC2.

import igneous.task_creation as tc
from taskqueue import LocalTaskQueue

src_layer_path = 'file://output'
dest_layer_path = 'file://output2'

with LocalTaskQueue(parallel=8) as tq:
  tasks = tc.create_transfer_tasks(
    src_layer_path, dest_layer_path, 
    chunk_size=(64,64,64), skip_downsamples=True, compress='gzip'
  )
  tq.insert_all(tasks)

print("Done!")


william-silversmith commented on July 16, 2024


cnzqy1 commented on July 16, 2024

The filesystem is SSD block storage (XFS) mounted directly on the EC2 instance. I ran the exact same code as above on the server over SSH, using the file:// protocol.


william-silversmith commented on July 16, 2024

This is pretty weird. The exact same codepath is going to be executed in both situations. I use an SSD filesystem on my local machine, but it's MacOS, so the major differences would be Linux and XFS. Given that we use igneous extensively on Linux, XFS seems to be the odd one out. I don't think I've ever tested with that filesystem.

How does the script perform in single-process mode? As 8 independent processes? Can you check to see if contents are getting written to disk or is the OS buffer filling up until everything explodes?
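One low-effort way to do the single-process check (a sketch reusing process and to_upload from the script above) is to drop the executor and run the slices serially:

# Single-process sanity check: same per-slice work, no ProcessPoolExecutor.
# If memory still climbs slice after slice, the leak is in the per-slice code
# path rather than in the worker pool.
for z in to_upload:
    process(z)

While it runs, watching the Dirty and Writeback lines in /proc/meminfo will show whether writes are reaching disk or just accumulating in the page cache.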


william-silversmith commented on July 16, 2024

The memory usage is pretty abnormal. I'll keep my eye open for more instances of this. If you end up wanting to debug it, I'll be happy to follow along and provide help.
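If you do dig in, a simple first step is logging each worker's resident set size after every slice. A rough sketch, assuming psutil is installed (the wrapper name is made up):

import psutil

def process_with_memlog(z):
    # Hypothetical wrapper around process() that reports this worker's RSS
    # after each slice is written.
    process(z)
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
    print('slice %d done, pid %d, RSS %.2f GB' % (z, os.getpid(), rss_gb))

with ProcessPoolExecutor(max_workers=4) as executor:
    executor.map(process_with_memlog, to_upload)

If RSS grows roughly linearly with the number of slices a worker has handled, something is holding per-slice references; if RSS stays flat while total system memory climbs, the pressure is coming from the filesystem cache instead.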

