Labels: ep:CoreML (issues related to CoreML execution provider), ep:MIGraphX (issues related to AMD MIGraphX execution provider), feature request (request for unsupported feature or enhancement)
Description
Describe the feature request
From my research, only TensorRT has proper cache invalidation; after onnxruntime is updated, the outdated caches left behind by the previous version cause onnxruntime to crash.
| EP | Auto-invalidation | Version Detection |
|---|---|---|
| TensorRT (ORT 1.20+) | Yes | Runtime |
| MIGraphX | Partial | Compile-time |
| CoreML | No | None |
For now I use this workaround, which keys the cache directory on the onnxruntime version so an upgraded runtime never reads a cache written by an older version:

```python
import os
import onnxruntime

# Per-version cache directory: a runtime upgrade changes the path,
# so stale caches from older versions are simply never loaded.
cache_path = os.path.join('.caches', onnxruntime.get_version_string())
session = onnxruntime.InferenceSession(
    'model.onnx',
    providers=[('CoreMLExecutionProvider', {'ModelCacheDirectory': cache_path})],
)
```

Describe scenario use case
I recommend a universal solution for cache handling; it should not be EP-specific.
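A generic, EP-agnostic scheme could work the same way for every execution provider: key the cache directory on the runtime version and prune directories left behind by other versions, so stale caches are deleted rather than merely bypassed. A minimal sketch of that idea (the `versioned_cache_dir` helper is illustrative and not part of the onnxruntime API):

```python
import os
import shutil


def versioned_cache_dir(version, root='.caches'):
    """Return a per-version cache directory under `root`, removing
    directories left behind by any other runtime version."""
    path = os.path.join(root, version)
    if os.path.isdir(root):
        for entry in os.listdir(root):
            if entry != version:
                # Stale cache from a different version: delete it.
                shutil.rmtree(os.path.join(root, entry), ignore_errors=True)
    os.makedirs(path, exist_ok=True)
    return path
```

With onnxruntime installed, `versioned_cache_dir(onnxruntime.get_version_string())` could then be passed as `ModelCacheDirectory` (or the equivalent cache option of any other EP).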