
v0.16.3

@PawelPeczek-Roboflow released this 22 Aug 19:23

🔨 Fixed

🚀 Added

SAM2 extension

When running inference with the SAM2 model, you can ask the inference package and the inference server to cache prompts and low-resolution masks from your inputs so they can be re-used later on request. Two parameters (available both in the SAM2 request payload and in the SegmentAnything2.segment_image(...) method) control how the functionality works:

  • save_logits_to_cache
  • load_logits_from_cache

Saving logits masks to the cache makes it possible to re-use them for consecutive inferences against the same image. Enabling loading triggers a search through the cache for the most similar prompt cached for this specific image and retrieves its mask. The mechanism is useful when the same image is segmented multiple times with slightly different sets of prompts, as injecting previous masks in that scenario may lead to better results:
[Before / after comparison images]

Please note that this feature is different from the image-embeddings cache, which speeds up consecutive requests with the same image. If you don't want this feature enabled, set DISABLE_SAM2_LOGITS_CACHE=True in your environment.
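
Below is a minimal usage sketch of the two flags from Python. Only save_logits_to_cache, load_logits_from_cache, SegmentAnything2.segment_image(...), and DISABLE_SAM2_LOGITS_CACHE are taken from this release note; the import path, constructor arguments, prompt parameters, and return values are assumptions for illustration.

```python
# Hypothetical sketch - only the two cache flags are confirmed by this release.
from inference.models.sam2 import SegmentAnything2  # assumed import path

model = SegmentAnything2(model_id="sam2/hiera_large")  # assumed model_id

# First request: segment with an initial point prompt and store the
# low-resolution mask logits in the cache for this image.
first = model.segment_image(
    "scene.jpg",
    point_coords=[[400, 300]],   # assumed prompt parameters
    point_labels=[1],
    save_logits_to_cache=True,
    load_logits_from_cache=False,
)

# Follow-up request on the same image with a refined prompt: the most similar
# cached prompt's logits are looked up and injected, which may improve the mask.
refined = model.segment_image(
    "scene.jpg",
    point_coords=[[400, 300], [460, 330]],
    point_labels=[1, 1],
    save_logits_to_cache=True,
    load_logits_from_cache=True,
)

# To disable the logits cache entirely, set DISABLE_SAM2_LOGITS_CACHE=True in
# the environment before starting the server / importing the package.
```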

🏅 @probicheaux and @tonylampada added the functionality in #582

Remaining changes

  • @EmilyGavrilenko added Workflow block search metadata to improve the UI experience in #588
  • @grzegorz-roboflow added an internal parameter to the Workflows request denoting a preview in the UI in #595
  • @grzegorz-roboflow improved usage tracking, extending it to models, in #601 and #548
  • Workflows were equipped with a new batch-oriented input - VideoFrameMetadata - letting blocks process videos statefully; see #590 and #597, more docs will come soon

Full Changelog: v0.16.2...v0.16.3