- Add input-feature hit-rate counting for HeteroLR and Hetero SecureBoost
- Optimize Hetero SecureBoost inference communication: the number of communication rounds is reduced to one by having the host send the guest a pre-computed host-node route, which the guest uses during inference.
- Replace serving-router with a new service, serving-proxy, which supports authentication and inference requests over HTTP or gRPC
- Decouple FATE-Serving from Eggroll; models are read directly from FATE-Flow
- Fix a bug in retrieving remote inference results from the cache
- Use a metrics component and provide monitoring through JMX
- The host supports binding its gRPC interface to model information and registering it in ZooKeeper, so callers can route to different instances by model information.
- The guest adds a gRPC interface for model binding: a model is bound to a service id and registered in ZooKeeper, so callers can route to different instances by service id. The service id is assigned by FATE-Flow and uniquely identifies a model.
- Support specifying a subset of columns in the OneHot Encoder
- Add Online OneHotEncoder transform
- Add Online heterogeneous FeatureBinning transform
- Add heterogeneous SecureBoost online inference for binary classification, multi-class classification, and regression
- Add service governance: discover the IP and port of all gRPC interfaces through ZooKeeper
- Support automatically restoring loaded models when the service restarts
- Add online federated modeling pipeline DSL parser for online federated inference
- Add a multi-level cache for multi-party inference results
- Add startInferenceJob and getInferenceResult interfaces to support asynchronous inference
- Normalize inference return codes
- Log inference summaries and inference details in real time
- Improve loading of the host-side pre/post-processing adapters and the data-access adapter
- Dynamically load federated learning models.
- Real-time prediction using federated learning models.
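The single-round SecureBoost inference item above can be illustrated with a minimal sketch: the host pre-computes, for each of its split nodes, which child a given sample falls into, and sends that route map to the guest in one message; the guest then walks each tree locally. All class and field names here are illustrative, not FATE-Serving's actual API.

```java
import java.util.Map;

/** Hypothetical sketch of single-round Hetero SecureBoost inference:
 *  the guest traverses a tree using its own splits, and consults a
 *  host-provided route map at nodes whose split feature lives on the host. */
public class HostRouteInference {
    /** A node is a leaf (weight) or a split owned by the guest or the host. */
    static class Node {
        int id; boolean leaf; double weight;
        boolean hostOwned;             // split feature lives on the host
        int feature; double threshold; // guest-side split (only if !hostOwned)
        int left = -1, right = -1;     // child node ids
    }

    /** Traverse one tree for one sample. `hostRoute` maps each host-owned
     *  node id to the child id the host pre-computed for this sample. */
    static double predict(Map<Integer, Node> tree, double[] guestFeatures,
                          Map<Integer, Integer> hostRoute) {
        Node cur = tree.get(0); // root has id 0 in this sketch
        while (!cur.leaf) {
            int next;
            if (cur.hostOwned) {
                next = hostRoute.get(cur.id);            // decision made by host
            } else {
                next = guestFeatures[cur.feature] <= cur.threshold
                        ? cur.left : cur.right;          // decision made locally
            }
            cur = tree.get(next);
        }
        return cur.weight;
    }
}
```

Because the route map covers every host-owned node the sample might reach, one host-to-guest message suffices for the whole traversal, instead of one round per host split.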
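The multi-level cache item above follows a common two-tier pattern: a small in-process cache in front of a slower shared store. A minimal sketch, with a plain map standing in for the remote tier (a real deployment would typically use something like Redis); the class and method names are assumptions, not FATE-Serving's actual types.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

/** Sketch of a two-level cache for multi-party inference results:
 *  level 1 is an in-process LRU, level 2 a stand-in for a shared store. */
public class TwoLevelCache<K, V> {
    private final Map<K, V> local;   // level 1: in-process LRU
    private final Map<K, V> remote;  // level 2: shared store stand-in

    public TwoLevelCache(int localCapacity, Map<K, V> remote) {
        this.remote = remote;
        // access-order LinkedHashMap evicting the least recently used entry
        this.local = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > localCapacity;
            }
        };
    }

    /** Look up a key, filling both levels on a full miss via `loader`. */
    public V get(K key, Function<K, V> loader) {
        V v = local.get(key);
        if (v != null) return v;     // L1 hit
        v = remote.get(key);
        if (v == null) {             // full miss: run the (federated) inference
            v = loader.apply(key);
            remote.put(key, v);
        }
        local.put(key, v);           // promote into L1
        return v;
    }
}
```

The payoff for federated inference is that a repeated request can be answered from either tier without another round trip to the remote parties.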
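The asynchronous startInferenceJob / getInferenceResult pair above follows a submit-then-poll pattern. A minimal sketch under that assumption; apart from the two interface names taken from the item, every identifier here is illustrative.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

/** Sketch of asynchronous inference: startInferenceJob enqueues the request
 *  and returns a job id at once; getInferenceResult polls for the result. */
public class AsyncInferenceService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    /** Enqueue an inference job and return its id without blocking. */
    public String startInferenceJob(Supplier<String> inference) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, pool.submit(inference::get));
        return jobId;
    }

    /** Poll for a result; returns null while the job is still running. */
    public String getInferenceResult(String jobId) throws Exception {
        Future<String> f = jobs.get(jobId);
        if (f == null || !f.isDone()) return null;
        return f.get();
    }

    public void shutdown() { pool.shutdown(); }
}
```

Decoupling submission from retrieval lets a caller fire off a multi-party inference, which may involve slow cross-party round trips, and collect the result later by job id.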