Run pre-commit on files
dyastremsky committed Jun 27, 2023

Verified: this commit was created on GitHub.com and signed with GitHub's verified signature.
1 parent 15a873b commit 4f99899
Showing 31 changed files with 99 additions and 100 deletions.
10 changes: 5 additions & 5 deletions include/triton/core/tritonbackend.h
@@ -610,7 +610,7 @@ TRITONBACKEND_ResponseFactorySendFlags(
/// response using TRITONBACKEND_ResponseOutput and
/// TRITONBACKEND_OutputBuffer *before* another response is created
/// for the request. For a given response, outputs can be created in
-/// any order but they must be created sequentially/sychronously (for
+/// any order but they must be created sequentially/synchronously (for
/// example, the backend cannot use multiple threads to simultaneously
/// add multiple outputs to a response).
///
@@ -742,7 +742,7 @@ TRITONBACKEND_DECLSPEC TRITONSERVER_Error* TRITONBACKEND_StateNew(
const int64_t* shape, const uint32_t dims_count);

/// Update the state for the sequence. Calling this function will replace the
-/// state stored for this seqeunce in Triton with 'state' provided in the
+/// state stored for this sequence in Triton with 'state' provided in the
/// function argument. If this function is called when sequence batching is not
/// enabled or there is no 'states' section in the sequence batching section of
/// the model configuration, this call will return an error. The backend is not
@@ -891,7 +891,7 @@ TRITONBACKEND_BackendSetExecutionPolicy(
/// communicated to Triton as indicated by 'artifact_type'.
///
/// TRITONBACKEND_ARTIFACT_FILESYSTEM: The backend artifacts are
-/// made available to Triton via the local filesytem. 'location'
+/// made available to Triton via the local filesystem. 'location'
/// returns the full path to the directory containing this
/// backend's artifacts. The returned string is owned by Triton,
/// not the caller, and so should not be modified or freed.
@@ -959,7 +959,7 @@ TRITONBACKEND_DECLSPEC TRITONSERVER_Error* TRITONBACKEND_ModelVersion(
/// communicated to Triton as indicated by 'artifact_type'.
///
/// TRITONBACKEND_ARTIFACT_FILESYSTEM: The model artifacts are made
-/// available to Triton via the local filesytem. 'location'
+/// available to Triton via the local filesystem. 'location'
/// returns the full path to the directory in the model repository
/// that contains this model's artifacts. The returned string is
/// owned by Triton, not the caller, and so should not be modified
@@ -978,7 +978,7 @@ TRITONBACKEND_DECLSPEC TRITONSERVER_Error* TRITONBACKEND_ModelRepository(
/// the object. The configuration is available via this call even
/// before the model is loaded and so can be used in
/// TRITONBACKEND_ModelInitialize. TRITONSERVER_ServerModelConfig
-/// returns equivalent information but is not useable until after the
+/// returns equivalent information but is not usable until after the
/// model loads.
///
/// \param model The model.
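
The first hunk in this file documents the output-creation contract for response factories: outputs may be added in any order, but each must be created and filled one at a time. A minimal sketch of a backend honoring that contract, with simplified error handling; the name "OUTPUT0" and the 4-element FP32 shape are illustrative, not from this commit:

#include "triton/core/tritonbackend.h"

static const char* kOutputName = "OUTPUT0";  // illustrative name

TRITONSERVER_Error*
SendSingleOutputResponse(TRITONBACKEND_Request* request)
{
  TRITONBACKEND_Response* response = nullptr;
  TRITONSERVER_Error* err = TRITONBACKEND_ResponseNew(&response, request);
  if (err != nullptr) {
    return err;
  }

  // Create the output, then its buffer, before creating any other
  // output: sequential/synchronous creation, per the comment above.
  TRITONBACKEND_Output* output = nullptr;
  const int64_t shape[1] = {4};
  err = TRITONBACKEND_ResponseOutput(
      response, &output, kOutputName, TRITONSERVER_TYPE_FP32, shape,
      1 /* dims_count */);

  void* buffer = nullptr;
  TRITONSERVER_MemoryType memory_type = TRITONSERVER_MEMORY_CPU;
  int64_t memory_type_id = 0;
  if (err == nullptr) {
    err = TRITONBACKEND_OutputBuffer(
        output, &buffer, 4 * sizeof(float), &memory_type, &memory_type_id);
  }
  if ((err == nullptr) && (memory_type != TRITONSERVER_MEMORY_GPU)) {
    static_cast<float*>(buffer)[0] = 1.0f;  // ... fill the tensor ...
  }

  // Send the response as final; 'err', if any, rides along with it.
  TRITONSERVER_Error* send_err = TRITONBACKEND_ResponseSend(
      response, TRITONSERVER_RESPONSE_COMPLETE_FINAL, err);
  if (err != nullptr) {
    TRITONSERVER_ErrorDelete(err);
  }
  return send_err;
}
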
2 changes: 1 addition & 1 deletion include/triton/core/tritoncache.h
@@ -198,7 +198,7 @@ TRITONCACHE_DECLSPEC TRITONSERVER_Error* TRITONCACHE_Copy(
/// to load.
///

-/// Intialize a new cache object.
+/// Initialize a new cache object.
///
/// This function is required to be implemented by the cache.
///
4 changes: 2 additions & 2 deletions include/triton/core/tritonrepoagent.h
@@ -158,7 +158,7 @@ typedef enum TRITONREPOAGENT_actiontype_enum {
///
/// TRITONREPOAGENT_ARTIFACT_FILESYSTEM: The model artifacts are
/// made available to the agent via the local
-/// filesytem. 'location' returns the full path to the directory
+/// filesystem. 'location' returns the full path to the directory
/// in the model repository that contains the model's
/// artifacts. The returned location string is owned by Triton,
/// not the caller, and so should not be modified or freed. The
@@ -232,7 +232,7 @@ TRITONREPOAGENT_ModelRepositoryLocationRelease(
/// communicated to Triton as indicated by 'artifact_type'.
///
/// TRITONREPOAGENT_ARTIFACT_FILESYSTEM: The model artifacts are
-/// made available to Triton via the local filesytem. 'location' returns
+/// made available to Triton via the local filesystem. 'location' returns
/// the full path to the directory. Ownership of the contents of the
/// returned directory are transferred to Triton and the agent should not
/// modified or freed the contents until TRITONREPOAGENT_ModelFinalize.
10 changes: 5 additions & 5 deletions include/triton/core/tritonserver.h
@@ -1173,7 +1173,7 @@ TRITONSERVER_InferenceRequestAddInput(
///
/// \param inference_request The request object.
/// \param name The name of the input. This name is only used as a reference
-/// of the raw input in other Tritonserver APIs. It doesn't assoicate with the
+/// of the raw input in other Tritonserver APIs. It doesn't associate with the
/// name used in the model.
/// \return a TRITONSERVER_Error indicating success or failure.
TRITONSERVER_DECLSPEC struct TRITONSERVER_Error*
@@ -1252,7 +1252,7 @@ TRITONSERVER_InferenceRequestAppendInputDataWithHostPolicy(
/// \param inference_request The request object.
/// \param name The name of the input.
/// \param base The base address of the input data.
-/// \param buffer_attributes The buffer attrubutes of the input.
+/// \param buffer_attributes The buffer attributes of the input.
/// \return a TRITONSERVER_Error indicating success or failure.
TRITONSERVER_DECLSPEC struct TRITONSERVER_Error*
TRITONSERVER_InferenceRequestAppendInputDataWithBufferAttributes(
@@ -1867,7 +1867,7 @@ TRITONSERVER_ServerOptionsSetMinSupportedComputeCapability(
/// Enable or disable exit-on-error in a server options.
///
/// \param options The server options object.
-/// \param exit True to enable exiting on intialization error, false
+/// \param exit True to enable exiting on initialization error, false
/// to continue.
/// \return a TRITONSERVER_Error indicating success or failure.
TRITONSERVER_DECLSPEC struct TRITONSERVER_Error*
@@ -2157,7 +2157,7 @@ TRITONSERVER_DECLSPEC struct TRITONSERVER_Error* TRITONSERVER_ServerStop(
/// \param server The inference server object.
/// \param repository_path The full path to the model repository.
/// \param name_mapping List of name_mapping parameters. Each mapping has
-/// the model directory name as its key, overriden model name as its value.
+/// the model directory name as its key, overridden model name as its value.
/// \param model_count Number of mappings provided.
/// \return a TRITONSERVER_Error indicating success or failure.
TRITONSERVER_DECLSPEC struct TRITONSERVER_Error*
@@ -2219,7 +2219,7 @@ TRITONSERVER_ServerModelIsReady(
///
/// - TRITONSERVER_BATCH_UNKNOWN: Triton cannot determine the
/// batching properties of the model. This means that the model
-/// does not support batching in any way that is useable by
+/// does not support batching in any way that is usable by
/// Triton. The returned 'voidp' value is nullptr.
///
/// - TRITONSERVER_BATCH_FIRST_DIM: The model supports batching
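
As a companion to the exit-on-error hunk above, here is a minimal in-process server setup. This is a sketch only: the "/models" repository path is hypothetical and error handling is reduced to an abort helper.

#include <cstdio>
#include <cstdlib>

#include "triton/core/tritonserver.h"

// Abort on any error (sketch-level handling only).
static void
Check(TRITONSERVER_Error* err)
{
  if (err != nullptr) {
    fprintf(stderr, "error: %s\n", TRITONSERVER_ErrorMessage(err));
    TRITONSERVER_ErrorDelete(err);
    exit(1);
  }
}

int
main()
{
  TRITONSERVER_ServerOptions* options = nullptr;
  Check(TRITONSERVER_ServerOptionsNew(&options));
  Check(TRITONSERVER_ServerOptionsSetModelRepositoryPath(options, "/models"));
  // Per the doc comment above: true exits on an initialization error,
  // false keeps serving whatever models loaded successfully.
  Check(TRITONSERVER_ServerOptionsSetExitOnError(options, false));

  TRITONSERVER_Server* server = nullptr;
  Check(TRITONSERVER_ServerNew(&server, options));
  Check(TRITONSERVER_ServerOptionsDelete(options));

  // ... build requests and run inference here ...

  Check(TRITONSERVER_ServerDelete(server));
  return 0;
}
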
2 changes: 1 addition & 1 deletion src/CMakeLists.txt
@@ -62,7 +62,7 @@ endif() # TRITON_ENABLE_GPU
#
# Boost
#
-# Minimum of 1.78 required for use of boost::span.  This can eventually be
+# Minimum of 1.78 required for use of boost::span. This can eventually be
# relaxed and replaced with std::span in C++20.
#
find_package(Boost 1.78 REQUIRED)
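
For context on why 1.78 is the floor: boost::span first shipped in Boost.Core 1.78 and mirrors C++20 std::span, which is why the comment expects it can later be swapped out. A small standalone illustration, not from this repository:

#include <boost/core/span.hpp>  // boost::span, new in Boost 1.78

#include <iostream>
#include <numeric>
#include <vector>

// A non-owning view over contiguous memory, interface-compatible with
// C++20 std::span (hence "relaxed and replaced" above).
static double
Sum(boost::span<const double> values)
{
  return std::accumulate(values.begin(), values.end(), 0.0);
}

int
main()
{
  std::vector<double> v{1.0, 2.0, 3.0};
  std::cout << Sum(v) << "\n";  // vectors convert to spans implicitly
  return 0;
}
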
2 changes: 1 addition & 1 deletion src/backend_manager.cc
@@ -79,7 +79,7 @@ TritonBackend::Create(
// Backend initialization is optional... The TRITONBACKEND_Backend
// object is this TritonBackend object. We must set set shared
// library path to point to the backend directory in case the
-// backend library attempts to load additional shared libaries.
+// backend library attempts to load additional shared libraries.
if (local_backend->backend_init_fn_ != nullptr) {
std::unique_ptr<SharedLibrary> slib;
RETURN_IF_ERROR(SharedLibrary::Acquire(&slib));
4 changes: 2 additions & 2 deletions src/backend_model.cc
@@ -173,7 +173,7 @@ TritonModel::Create(
// TritonModel object.
if (backend->ModelInitFn() != nullptr) {
// We must set set shared library path to point to the backend directory in
-// case the backend library attempts to load additional shared libaries.
+// case the backend library attempts to load additional shared libraries.
// Currently, the set and reset function is effective only on Windows, so
// there is no need to set path on non-Windows.
// However, parallel model loading will not see any speedup on Windows and
@@ -199,7 +199,7 @@ TritonModel::Create(

RETURN_IF_ERROR(local_model->GetExecutionPolicy(model_config));

-// Initalize the custom batching library for the model, if provided.
+// Initialize the custom batching library for the model, if provided.
if (model_config.has_sequence_batching()) {
if (model_config.parameters().contains("TRITON_BATCH_STRATEGY_PATH")) {
return Status(
4 changes: 2 additions & 2 deletions src/backend_model_instance.cc
@@ -355,10 +355,10 @@ TritonModelInstance::CreateInstance(

// Instance initialization is optional... We must set set shared
// library path to point to the backend directory in case the
-// backend library attempts to load additional shared libaries.
+// backend library attempts to load additional shared libraries.
if (model->Backend()->ModelInstanceInitFn() != nullptr) {
// We must set set shared library path to point to the backend directory in
-// case the backend library attempts to load additional shared libaries.
+// case the backend library attempts to load additional shared libraries.
// Currently, the set and reset function is effective only on Windows, so
// there is no need to set path on non-Windows.
// However, parallel model loading will not see any speedup on Windows and
2 changes: 1 addition & 1 deletion src/cache_entry.cc
@@ -124,7 +124,7 @@ CacheEntry::SetBufferSize(InferenceResponse* response)
// 1. First the packed buffer will hold the number of outputs as a uint32_t
packed_response_byte_size += sizeof(uint32_t);
// These sizes will be used to request allocated buffers from the cache
-// to copy direcly into
+// to copy directly into
for (const auto& output : response->Outputs()) {
uint64_t packed_output_byte_size = 0;
RETURN_IF_ERROR(GetByteSize(output, &packed_output_byte_size));
2 changes: 1 addition & 1 deletion src/cache_entry.h
@@ -87,7 +87,7 @@ class CacheEntry {
/* Lookup helpers */
Status DeserializeBuffers(boost::span<InferenceResponse*> responses);

-// Typically, the cache entry will now own any associted buffers.
+// Typically, the cache entry will now own any associated buffers.
// However, if a CacheAllocator wants the entry to own the buffers, this
// can be used to signal that the entry should free its buffers on destruction
void FreeBuffersOnExit() { free_buffers_ = true; }
4 changes: 2 additions & 2 deletions src/ensemble_scheduler.cc
@@ -632,7 +632,7 @@ EnsembleContext::ResponseComplete(
if (type != TRITONSERVER_PARAMETER_BOOL) {
err = TRITONSERVER_ErrorNew(
TRITONSERVER_ERROR_INVALID_ARG,
"expect paremeter 'sequence_start' to be "
"expect parameter 'sequence_start' to be "
"TRITONSERVER_PARAMETER_BOOL");
} else {
if (*reinterpret_cast<const bool*>(vvalue)) {
@@ -644,7 +644,7 @@ EnsembleContext::ResponseComplete(
if (type != TRITONSERVER_PARAMETER_BOOL) {
err = TRITONSERVER_ErrorNew(
TRITONSERVER_ERROR_INVALID_ARG,
"expect paremeter 'sequence_end' to be "
"expect parameter 'sequence_end' to be "
"TRITONSERVER_PARAMETER_BOOL");
} else {
if (*reinterpret_cast<const bool*>(vvalue)) {
2 changes: 1 addition & 1 deletion src/ensemble_utils.cc
@@ -100,7 +100,7 @@ ValidateTensorConsistency(

// Shapes must match or either one uses variable size shape, if one uses
// variable size shape, shape consistency will be checked at runtime.
-// If dims mismatch, compare agian with full dims in case the tensor is
+// If dims mismatch, compare again with full dims in case the tensor is
// used for both non-batching model and batching model. In that case, it
// is acceptable if non-batching model shape is [-1, d_0, d_1, ..., d_n]
// while the batching model shape is [d_0, d_1, ..., d_n].
2 changes: 1 addition & 1 deletion src/filesystem/api.h
@@ -94,7 +94,7 @@ std::string BaseName(const std::string& path);
std::string DirName(const std::string& path);

/// Does a file or directory exist?
-/// \param path The path to check for existance.
+/// \param path The path to check for existence.
/// \param exists Returns true if file/dir exists
/// \return Error status if unable to perform the check
Status FileExists(const std::string& path, bool* exists);
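
The declaration above uses the out-parameter pattern common in this codebase: the returned Status reports only whether the check could be performed, while the answer itself arrives through 'exists'. A hedged usage fragment, assuming the surrounding triton::core Status type and a hypothetical path:

// Assumes the in-tree Status type from src/status.h; path is hypothetical.
bool exists = false;
const Status status = FileExists("/models/config.pbtxt", &exists);
if (!status.IsOk()) {
  // The check itself failed (e.g. a remote-filesystem or permission error).
} else if (!exists) {
  // The check succeeded and the path is absent.
}
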
4 changes: 2 additions & 2 deletions src/infer_request.cc
@@ -361,7 +361,7 @@ InferenceRequest::Release(
InferenceRequest*
InferenceRequest::CopyAsNull(const InferenceRequest& from)
{
-// Create a copy of 'from' request with artifical inputs and no requested
+// Create a copy of 'from' request with artificial inputs and no requested
// outputs. Maybe more efficient to share inputs and other metadata,
// but that binds the Null request with 'from' request's lifecycle.
std::unique_ptr<InferenceRequest> lrequest(
@@ -475,7 +475,7 @@ InferenceRequest::CopyAsNull(const InferenceRequest& from)
*new_input->MutableShapeWithBatchDim() = input.second.ShapeWithBatchDim();

// Note that the input that have max byte size will be responsible for
-// holding the artifical data, while other inputs will hold a reference to
+// holding the artificial data, while other inputs will hold a reference to
// it with byte size that matches 'from'
if (input.first == *max_input_name) {
new_input->SetData(data);
2 changes: 1 addition & 1 deletion src/infer_request.h
@@ -499,7 +499,7 @@ class InferenceRequest {
int64_t* memory_type_id);

// Add a callback to be invoked on releasing the request object from Triton.
-// Multile callbacks can be added by calling this function in order,
+// Multiple callbacks can be added by calling this function in order,
// and they will be invoked in reversed order.
Status AddInternalReleaseCallback(std::function<void()>&& callback)
{
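
The comment above pins down an ordering contract: callbacks registered first run last, like destructors, so later-acquired resources are released first. A self-contained sketch of that LIFO pattern (not Triton's actual implementation):

#include <functional>
#include <vector>

class ReleaseCallbacks {
 public:
  // Register in acquisition order.
  void Add(std::function<void()>&& callback)
  {
    callbacks_.emplace_back(std::move(callback));
  }

  // Invoke in reverse order, mirroring destructor semantics.
  void RunAll()
  {
    for (auto it = callbacks_.rbegin(); it != callbacks_.rend(); ++it) {
      (*it)();
    }
    callbacks_.clear();
  }

 private:
  std::vector<std::function<void()>> callbacks_;
};
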
4 changes: 2 additions & 2 deletions src/infer_stats.h
@@ -141,15 +141,15 @@ class InferenceStatsAggregator {
const uint64_t cache_miss_duration_ns);

// Add durations to batch infer stats for a batch execution.
-// 'success_request_count' is the number of sucess requests in the
+// 'success_request_count' is the number of success requests in the
// batch that have infer_stats attached.
void UpdateInferBatchStats(
MetricModelReporter* metric_reporter, const size_t batch_size,
const uint64_t compute_start_ns, const uint64_t compute_input_end_ns,
const uint64_t compute_output_start_ns, const uint64_t compute_end_ns);

// Add durations to batch infer stats for a batch execution.
-// 'success_request_count' is the number of sucess requests in the
+// 'success_request_count' is the number of success requests in the
// batch that have infer_stats attached.
void UpdateInferBatchStatsWithDuration(
MetricModelReporter* metric_reporter, size_t batch_size,
2 changes: 1 addition & 1 deletion src/memory.h
@@ -161,7 +161,7 @@ class MutableMemory : public Memory {
class AllocatedMemory : public MutableMemory {
public:
// Create a continuous data buffer with 'byte_size', 'memory_type' and
-// 'memory_type_id'. Note that the buffer may be created on different memeory
+// 'memory_type_id'. Note that the buffer may be created on different memory
// type and memory type id if the original request type and id can not be
// satisfied, thus the function caller should always check the actual memory
// type and memory type id before use.
6 changes: 3 additions & 3 deletions src/model.h
@@ -83,8 +83,8 @@ struct ModelIdentifier {
return (namespace_ + "::" + name_);
}

-// namespace is not a reflection of the model repository althought it is
-// currently implmented to be the same as the repository of the model.
+// namespace is not a reflection of the model repository although it is
+// currently implemented to be the same as the repository of the model.
std::string namespace_;
// name is the name registered to Triton, it is the model directory name
// by default and may be overwritten.
@@ -99,7 +99,7 @@ class hash<triton::core::ModelIdentifier> {
public:
size_t operator()(const triton::core::ModelIdentifier& model_id) const
{
-// trival hash for multiple entries
+// trivial hash for multiple entries
// https://en.cppreference.com/w/cpp/utility/hash
return (
hash<std::string>()(model_id.namespace_) ^
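
The hunk above uses the "trivial" combiner from cppreference: hash each member, then XOR them. A self-contained analogue of the same pattern, with its main caveat noted (XOR is symmetric, so swapping the two fields produces a collision; acceptable here since a namespace and a name rarely trade places):

#include <cstddef>
#include <functional>
#include <string>

struct Identifier {  // stand-in for ModelIdentifier
  std::string ns;
  std::string name;
  bool operator==(const Identifier& other) const
  {
    return ns == other.ns && name == other.name;
  }
};

namespace std {
template <>
struct hash<Identifier> {
  size_t operator()(const Identifier& id) const
  {
    // Trivial combination: XOR of the member hashes.
    return hash<string>()(id.ns) ^ hash<string>()(id.name);
  }
};
}  // namespace std

With the specialization and operator== in place, Identifier can key std::unordered_map and std::unordered_set, which is what the hunk enables for ModelIdentifier.
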
2 changes: 1 addition & 1 deletion src/model_config_utils.cc
@@ -933,7 +933,7 @@ AutoCompleteBackendFields(

// There must be at least one version directory that we can inspect to
// attempt to determine the platform. If not, we skip autofill with file name.
-// For now we allow multiple versions and only inspect the first verison
+// For now we allow multiple versions and only inspect the first version
// directory to ensure it is valid. We can add more aggressive checks later.
const bool has_version = (version_dirs.size() != 0);
const auto version_path =
2 changes: 1 addition & 1 deletion src/model_lifecycle.h
@@ -177,7 +177,7 @@ class ModelLifeCycle {

// Start loading model with specified versions asynchronously.
// All versions that are being served will be unloaded only after
-// the load is finished sucessfully.
+// the load is finished successfully.
Status AsyncLoad(
const ModelIdentifier& model_id, const std::string& model_path,
const inference::ModelConfig& model_config, const bool is_config_provided,