---
title: Splunk to Azure Log Analytics | Microsoft Docs
description: Assist for users who are familiar with Splunk in learning the Log Analytics query language.
services: log-analytics
author: bwren
manager: carmonm
ms.service: log-analytics
ms.workload: na
ms.tgt_pltfrm: na
ms.devlang: na
ms.topic: conceptual
ms.date: 08/21/2018
ms.author: bwren
ms.component: na
---

# Splunk to Azure Log Analytics
This article is intended to assist users who are familiar with Splunk in learning the Log Analytics query language. It makes direct comparisons between the two so that you can understand key differences, and also similarities where you can leverage your existing knowledge.
The following table compares concepts and data structures between Splunk and Log Analytics.
Concept | Splunk | Log Analytics | Comment |
---|---|---|---|
Deployment unit | cluster | cluster | Log Analytics allows arbitrary cross-cluster queries; Splunk does not. |
Data caches | buckets | Caching and retention policies | Controls the period and caching level for the data. This setting directly impacts the performance of queries and cost of the deployment. |
Logical partition of data | index | database | Allows logical separation of the data. Both implementations allow unions and joining across these partitions. |
Structured event metadata | N/A | table | Splunk does not expose event metadata to the search language. Log Analytics has the concept of a table, which has columns. Each event instance is mapped to a row. |
Data record | event | row | Terminology change only. |
Data record attribute | field | column | In Log Analytics, this is predefined as part of the table structure. In Splunk, each event has its own set of fields. |
Types | datatype | datatype | Log Analytics datatypes are more explicit because they are set on the columns. Both can work dynamically with data types and have a roughly equivalent set of datatypes, including JSON support. |
Query and search | search | query | Concepts are essentially the same between both Log Analytics and Splunk. |
Event ingestion time | System Time | ingestion_time() | In Splunk, each event gets a system timestamp of the time that the event was indexed. In Log Analytics, you can define a policy called ingestion_time that exposes a system column that can be referenced through the ingestion_time() function. |
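As a short sketch of the ingestion-time concept, assuming a workspace table named `Office_Hub_OHubBGTaskError` (the table used in the examples later in this article) with the ingestion_time policy enabled:

```kusto
Office_Hub_OHubBGTaskError
// ingestion_time() exposes the system column recorded when the event was ingested
| extend ingestionTime = ingestion_time()
| where ingestionTime > ago(1h)
```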
The following table specifies functions in Log Analytics that are equivalent to Splunk functions.
Splunk | Log Analytics | Comment |
---|---|---|
strcat | strcat() | (1) |
split | split() | (1) |
if | iff() | (1) |
tonumber | todouble(), tolong(), toint() | (1) |
upper, lower | toupper(), tolower() | (1) |
replace | replace() | (1) Also note that while replace() takes three parameters in both products, the parameters are different. |
substr | substring() | (1) Also note that Splunk uses one-based indices, while Log Analytics uses zero-based indices. |
tolower | tolower() | (1) |
toupper | toupper() | (1) |
match | matches regex | (2) |
regex | matches regex | In Splunk, `regex` is an operator. In Log Analytics, `matches regex` is a relational operator. |
searchmatch | == | In Splunk, searchmatch allows searching for the exact string. |
random | rand(), rand(n) | Splunk's function returns a number from zero to 2<sup>31</sup>-1. Log Analytics' returns a number between 0.0 and 1.0, or if a parameter is provided, an integer between 0 and n-1. |
now | now() | (1) |
relative_time | totimespan() | (1) In Log Analytics, Splunk's equivalent of `relative_time(datetimeVal, offsetVal)` is `datetimeVal + totimespan(offsetVal)`. For example, `search \| eval n=relative_time(now(), "-1d@d")` becomes `... \| extend myTime = now() - totimespan("1d")`. |
(1) In Splunk, the function is invoked with the `eval` operator. In Log Analytics, it is used as part of `extend` or `project`.

(2) In Splunk, the function is invoked with the `eval` operator. In Log Analytics, it can be used with the `where` operator.
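To illustrate footnote (1), here is a minimal Log Analytics sketch that invokes several of the functions above from `extend`; the table and column names are assumptions for illustration only:

```kusto
Office_Hub_OHubBGTaskError
| extend
    ErrorText = tolower(Data_ErrorText),             // Splunk: lower
    Label     = strcat(App_Name, "-", App_Platform), // Splunk: strcat
    IsRetry   = iff(Retry_Count > 0, "yes", "no")    // Splunk: if
```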
The following sections give examples of using different operators between Splunk and Log Analytics.
> [!NOTE]
> For the purpose of the following examples, the Splunk field *rule* maps to a table in Azure Log Analytics, and Splunk's default timestamp maps to the Log Analytics `ingestion_time()` column.
In Splunk, you can omit the `search` keyword and specify an unquoted string. In Azure Log Analytics, you must start each query with `find`; an unquoted string is a column name, and the lookup value must be a quoted string.
Product | Operator | Example |
---|---|---|
Splunk | **search** | `search Session.Id="c8894ffd-e684-43c9-9125-42adc25cd3fc" earliest=-24h` |
Log Analytics | **find** | `find Session.Id=="c8894ffd-e684-43c9-9125-42adc25cd3fc" and ingestion_time()> ago(24h)` |
Azure Log Analytics queries start from a tabular result set, to which the filter is applied. In Splunk, filtering is the default operation on the current index. You can also use the `where` operator in Splunk, but it is not recommended.
Product | Operator | Example |
---|---|---|
Splunk | **search** | `Event.Rule="330009.2" Session.Id="c8894ffd-e684-43c9-9125-42adc25cd3fc" _indextime>-24h` |
Log Analytics | **where** | `Office_Hub_OHubBGTaskError` |
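A complete version of the Log Analytics filter might look like the following sketch; the column name `Session_Id` is an assumption based on the note earlier in this article:

```kusto
Office_Hub_OHubBGTaskError
// filter to one session over the last 24 hours of ingested data
| where Session_Id == "c8894ffd-e684-43c9-9125-42adc25cd3fc"
    and ingestion_time() > ago(24h)
```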
Azure Log Analytics also supports `take` as an alias to `limit`. In Splunk, if the results are ordered, `head` will return the first n results. In Azure Log Analytics, `limit` is not ordered, but returns the first n rows that are found.
Product | Operator | Example |
---|---|---|
Splunk | **head** | `Event.Rule=330009.2` |
Log Analytics | **limit** | `Office_Hub_OHubBGTaskError` |
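A minimal Log Analytics sketch; the row count of 100 is an assumption for illustration:

```kusto
Office_Hub_OHubBGTaskError
| limit 100   // 'take 100' is an exact alias
```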
For bottom results, in Splunk you use `tail`. In Azure Log Analytics, you can specify the ordering direction with `asc`.
Product | Operator | Example |
---|---|---|
Splunk | **head** | `Event.Rule="330009.2"` |
Log Analytics | **top** | `Office_Hub_OHubBGTaskError` |
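A hedged sketch of a Log Analytics `top` query; the row count and ordering column are assumptions:

```kusto
Office_Hub_OHubBGTaskError
| top 20 by TimeGenerated desc   // use 'asc' to get the bottom results instead
```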
Splunk also has an `eval` function, which is not comparable to the `eval` operator. Both the `eval` operator in Splunk and the `extend` operator in Azure Log Analytics only support scalar functions and arithmetic operators.
Product | Operator | Example |
---|---|---|
Splunk | **eval** | `Event.Rule=330009.2` |
Log Analytics | **extend** | `Office_Hub_OHubBGTaskError` |
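A minimal `extend` sketch; the column names are assumptions for illustration:

```kusto
Office_Hub_OHubBGTaskError
| extend Duration = EndTime - StartTime   // scalar expression producing a new calculated column
```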
Azure Log Analytics uses the same operator to rename and to create a new field. Splunk has two separate operators, `eval` and `rename`.
Product | Operator | Example |
---|---|---|
Splunk | **rename** | `Event.Rule=330009.2` |
Log Analytics | **extend** | `Office_Hub_OHubBGTaskError` |
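For example, the following sketch uses the same `extend` syntax that creates a calculated column to introduce a renamed field; the column names are assumptions:

```kusto
Office_Hub_OHubBGTaskError
| extend Rule = Event_Rule   // same operator creates a new field or a renamed copy
```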
Splunk does not seem to have an operator similar to `project-away`. You can use the UI to filter away fields.
Product | Operator | Example |
---|---|---|
Splunk | **table** | `Event.Rule=330009.2` |
Log Analytics | **project**, **project-away** | `Office_Hub_OHubBGTaskError` |
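A sketch of both column-selection forms; the column names are assumptions:

```kusto
Office_Hub_OHubBGTaskError
| project Client_Id, Session_Id   // keep only the listed columns
// Alternatively, drop unwanted columns and keep everything else:
// | project-away Data_ErrorText
```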
See Aggregations in Log Analytics queries for the different aggregation functions.
Product | Operator | Example |
---|---|---|
Splunk | **stats** | `search (Rule=120502.*)` |
Log Analytics | **summarize** | `Office_Hub_OHubBGTaskError` |
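A hedged `summarize` sketch; the grouping columns are assumptions:

```kusto
Office_Hub_OHubBGTaskError
| summarize count() by App_Platform, Release_Audience   // event count per group
```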
Join in Splunk has significant limitations. The subquery has a limit of 10,000 results (set in the deployment configuration file), and there are a limited number of join flavors.
Product | Operator | Example |
---|---|---|
Splunk | **join** | `Event.Rule=120103* \| stats by Client.Id, Data.Alias` |
Log Analytics | **join** | `cluster("OAriaPPT").database("Office PowerPoint").Office_PowerPoint_PPT_Exceptions` |
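Continuing the cross-cluster reference above, a Log Analytics join sketch might look like this; the join key `Client_Id` is an assumption:

```kusto
Office_Hub_OHubBGTaskError
| join kind=inner (
    cluster("OAriaPPT").database("Office PowerPoint").Office_PowerPoint_PPT_Exceptions
) on Client_Id
```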
In Splunk, to sort in ascending order you must use the `reverse` operator. Azure Log Analytics also supports defining where to put nulls: at the beginning or at the end.
Product | Operator | Example |
---|---|---|
Splunk | **sort** | `Event.Rule=120103` |
Log Analytics | **order by** | `Office_Hub_OHubBGTaskError` |
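A sketch of ordering with explicit null placement; the ordering column is an assumption:

```kusto
Office_Hub_OHubBGTaskError
| order by TimeGenerated asc nulls last   // ascending order without a 'reverse' step
```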
The `mvexpand` operator is similar in both Splunk and Log Analytics.
Product | Operator | Example |
---|---|---|
Splunk | **mvexpand** | `mvexpand foo` |
Log Analytics | **mvexpand** | `mvexpand foo` |
In the Log Analytics portal, only the first column is exposed. All columns are available through the API.
Product | Operator | Example |
---|---|---|
Splunk | **fields** | `Event.Rule=330009.2` |
Log Analytics | **facets** | `Office_Excel_BI_PivotTableCreate` |
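A hedged facet sketch; the column names are assumptions:

```kusto
Office_Excel_BI_PivotTableCreate
| facet by App_Branch, App_Platform   // one result table per listed column
```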
You can use `summarize arg_min()` instead to reverse the order of which record gets chosen.
Product | Operator | Example |
---|---|---|
Splunk | **dedup** | `Event.Rule=330009.2` |
Log Analytics | **summarize arg_max()** | `Office_Excel_BI_PivotTableCreate` |
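A sketch of deduplication with `arg_max()`; the timestamp and key columns are assumptions:

```kusto
Office_Excel_BI_PivotTableCreate
| summarize arg_max(TimeGenerated, *) by Client_Id   // latest record per client; arg_min() picks the earliest
```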
- Go through a lesson on writing queries in Log Analytics.