
v0.16.0

@hansent released this 09 Aug 16:48

❗ In release 0.16.0 we introduced a bug impacting workflows and inference_sdk

The bug was introduced in #565 and fixed in #585 (both by @PawelPeczek-Roboflow 😢) and caused results to come back in the wrong order for specific workflows blocks:

  • blocks with Roboflow models, whenever used with batch input (for instance, when a workflow was run against multiple images, or Dynamic Crop was used), returned predictions in an order that did not match the order of the input images
  • the same was true for the OpenAI block and the GPT-4V block
  • the problem was also introduced into inference_sdk, so whenever the client was called with multiple images, results may have been mismatched

🚀 Added

The next batch of updates for workflows 🥳

⚓ Versioning

From now on, both the Execution Engine and workflows blocks are versioned to ensure greater stability across the changes we make to improve the ecosystem. Each workflow definition now declares a version, forcing the app to run against a specific version of the Execution Engine. If the declared version is 1.1.0, the workflow requires Execution Engine >=1.1.0,<2.0.0, and we gain the ability to expose multiple major versions of the EE concurrently in the library (doing our best to ensure that within a major version we only add features and keep supporting everything released earlier within the same major). A sketch of a versioned workflow definition follows the example block manifest below. On top of that:

  • the block manifest metadata field name is now understood as the name of a block family, with an additional tag called version that can be added; we propose the following naming convention for block names: namespace/family_name@v1. Thanks to these changes, anyone can maintain multiple versions of the same block (appending the new implementation to their plugin), preserving backwards compatibility across breaking changes
  • each block manifest class may optionally expose the class method get_execution_engine_compatibility(...), which is used while loading a workflow to ensure that the selected Execution Engine is capable of running the specific block
✋ Example block manifest
from typing import Literal, Optional

from pydantic import ConfigDict

# WorkflowBlockManifest comes from the inference workflows plugin interface;
# the exact import path may differ between versions.
from inference.core.workflows.prototypes.block import WorkflowBlockManifest

class BlockManifest(WorkflowBlockManifest):
    model_config = ConfigDict(
        json_schema_extra={
            "name": "My Block",
            "version": "v1",
            ...
        }
    )
    type: Literal["my_namespace/my_block@v1"]
    ...

    @classmethod
    def get_execution_engine_compatibility(cls) -> Optional[str]:
        return ">=1.0.0,<2.0.0"
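And here is what a versioned workflow definition itself might look like. This is a minimal sketch: the "version" value pins the Execution Engine (1.1.0 means EE >=1.1.0,<2.0.0 is required), while the input, step, and output field names below are illustrative assumptions rather than a guaranteed schema.

# A minimal sketch of a versioned workflow definition; field names below
# are illustrative assumptions, not a guaranteed schema.
WORKFLOW_DEFINITION = {
    "version": "1.1.0",  # pins Execution Engine >=1.1.0,<2.0.0
    "inputs": [{"type": "InferenceImage", "name": "image"}],
    "steps": [
        {
            "type": "my_namespace/my_block@v1",  # versioned block name
            "name": "my_step",
            "image": "$inputs.image",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.my_step.predictions",
        }
    ],
}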

🚨 ⚠️ BREAKING ⚠️ 🚨 Got rid of asyncio in Execution Engine

If you were tired of coroutines performing compute-heavy tasks in workflows:

class MyBlock(WorkflowBlock):
    async def run(self):
        pass

we have great news. We've got rid of asyncio in favour of standard functions and methods, which are much more intuitive in our setup. This change obviously breaks all existing steps implemented as coroutines, but worry not. Here is an example of what needs to change - usually you just need to remove the async markers, but sometimes, unfortunately, pieces of asyncio code will need to be reworked.

class MyBlock(WorkflowBlock):
    def run(self):
        pass
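
If your block genuinely depended on a coroutine (for example, an async HTTP client), one way to rework it is to drive the old coroutine to completion from the new synchronous run(). A minimal sketch, where _legacy_async_logic is a hypothetical helper holding the old async implementation:

import asyncio

class MyBlock(WorkflowBlock):
    def run(self):
        # Drive the legacy coroutine to completion synchronously.
        return asyncio.run(self._legacy_async_logic())

    async def _legacy_async_logic(self):
        # hypothetical helper containing the old async implementation
        ...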

Endpoint to expose workflow definition schema

Thanks to @EmilyGavrilenko (#550), the UI will now be able to automatically verify syntax errors in workflow definitions.
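
For the curious, fetching that schema from a running inference server could look roughly like this. The release notes don't spell out the route, so the path below is an assumption; check the server's OpenAPI docs for the real one.

import requests

# NOTE: this route is an assumption for illustration - consult the server's
# OpenAPI docs (e.g. http://localhost:9001/docs) for the actual path.
response = requests.get("http://localhost:9001/workflows/definition/schema")
response.raise_for_status()
schema = response.json()  # JSON schema describing valid workflow definitions
print(sorted(schema.keys()))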

Roboflow Dedicated Deployment is closer and closer 😃

Thanks to @PacificDou, the inference server is getting ready to support new functionality nicknamed Dedicated Deployment. Stay tuned to learn more details - we can already tell you that this is something worth waiting for. You may find some hints in the PR.

🔨 Fixed

🚨 ⚠️ BREAKING ⚠️ 🚨 HTTP client of the inference server changes its default behaviour

The default value of the client_downsizing_disabled flag was changed from False to True in release 0.16.0! For clients using models with an input size above 1024x1024 on the hosted platform, this should improve prediction quality (the previous default caused input images to be downsized client-side and then artificially upsized on the server, with worse image quality). Some clients may want to keep the previous setting to potentially improve speed (when the internet connection is a bottleneck and large images are submitted despite a small model input size).

If you liked the previous behaviour more - simply:

from inference_sdk import InferenceHTTPClient, InferenceConfiguration

client = InferenceHTTPClient(
    "https://detect.roboflow.com",
    api_key="XXX",
).configure(InferenceConfiguration(
    client_downsizing_disabled=False,  # restore the pre-0.16.0 default
))

setuptools migrated to a version above 70.0.0 to mitigate a security issue

We've updated the rf-clip package to support setuptools>70.0.0 and bumped the version on the inference side.

🌱 Changed

🏅 New Contributors

Full Changelog: v0.15.2...v0.16.0