Merge remote-tracking branch 'dotnet/master' into sqlperf-envchange
Commit migrated from dotnet/corefx@e3862c8
Wraith2 committed Mar 26, 2019
2 parents 3b5291c + e6f95a5 commit 848ab16
Showing 463 changed files with 2,939 additions and 25,743 deletions.
2 changes: 1 addition & 1 deletion docs/libraries/coding-guidelines/breaking-change-rules.md
@@ -99,7 +99,7 @@ Breaking Change Rules

* Adding the `checked` keyword to a code-block

This may cause code in a block to to begin to throwing exceptions, an unacceptable change.
This may cause code in a block to begin to throwing exceptions, an unacceptable change.
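
  A minimal sketch of why this is breaking (the program here is illustrative):

  ```cs
  using System;

  class CheckedDemo
  {
      static void Main()
      {
          int max = int.MaxValue;
          // Without 'checked' this addition silently wraps to int.MinValue;
          // inside a checked block it throws instead, changing behavior.
          checked
          {
              Console.WriteLine(max + 1); // throws System.OverflowException
          }
      }
  }
  ```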

* Changing the order in which events are fired

4 changes: 2 additions & 2 deletions docs/libraries/coding-guidelines/interop-pinvokes.md
@@ -33,7 +33,7 @@ Attributes
Strings
-------

When the CharSet is Unicode or the argument is explicitly marked as `[MarshalAs(UnmanagedType.LPWSTR)]` _and_ the string is passed by value (not `ref` or `out`) the string will be be pinned and used directly by native code (rather than copied).
When the CharSet is Unicode or the argument is explicitly marked as `[MarshalAs(UnmanagedType.LPWSTR)]` _and_ the string is passed by value (not `ref` or `out`) the string will be pinned and used directly by native code (rather than copied).

Remember to mark the `[DllImport]` as `CharSet.Unicode` unless you explicitly want ANSI treatment of your strings.
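
A minimal sketch of such a declaration (using the real `MessageBoxW` export; the wrapper class name is illustrative):

```cs
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // CharSet.Unicode with a by-value string: the marshaller pins the
    // managed string and passes native code a pointer to its characters.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    internal static extern int MessageBoxW(IntPtr hWnd, string text, string caption, uint type);
}
```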

@@ -220,7 +220,7 @@ Structs

Managed structs are created on the stack and aren't removed until the method returns. By definition, then, they are "pinned" (they won't get moved by the GC). You can also simply take their address in unsafe code blocks if native code won't use the pointer past the end of the current method.

Blittable structs are much more performant as they they can simply be used directly by the marshalling layer. Try to make structs blittable (for example, avoid `bool`). See the "Blittable Types" section above for more details.
Blittable structs are much more performant as they can simply be used directly by the marshalling layer. Try to make structs blittable (for example, avoid `bool`). See the "Blittable Types" section above for more details.

*If* the struct is blittable use `sizeof()` instead of `Marshal.SizeOf<MyStruct>()` for better performance. As mentioned above, you can validate that the type is blittable by attempting to create a pinned `GCHandle`. If the type is not a string or considered blittable, `GCHandle.Alloc` will throw an `ArgumentException`.
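
A sketch of that validation trick (the helper is illustrative, not an existing API):

```cs
using System;
using System.Runtime.InteropServices;

internal static class MarshalingProbe
{
    // GCHandle.Alloc with GCHandleType.Pinned throws ArgumentException
    // for any type that is neither blittable nor a string.
    internal static bool IsBlittable<T>() where T : struct
    {
        try
        {
            GCHandle handle = GCHandle.Alloc(default(T), GCHandleType.Pinned);
            handle.Free();
            return true;
        }
        catch (ArgumentException)
        {
            return false;
        }
    }
}
```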

2 changes: 1 addition & 1 deletion docs/libraries/debugging/crash-dumps.md
@@ -1,4 +1,4 @@
Crash dumps can be useful for analyzing and debugging intermittent or hard-to-reproduce bugs. In all of our CI test runs and official build test runs, we use a utility called "Dumpling" to collect and archive crash dumps that are created during test execution. These crash dumps are archived on on the [Dumpling web portal](https://dumpling.azurewebsites.net/), which has download links, as well as auxiliary triage information gathered during crash dump collection.
Crash dumps can be useful for analyzing and debugging intermittent or hard-to-reproduce bugs. In all of our CI test runs and official build test runs, we use a utility called "Dumpling" to collect and archive crash dumps that are created during test execution. These crash dumps are archived on the [Dumpling web portal](https://dumpling.azurewebsites.net/), which has download links, as well as auxiliary triage information gathered during crash dump collection.

When a crash is encountered in a test run (and crash dump collection is enabled), the following information will be printed to the log:

2 changes: 1 addition & 1 deletion docs/libraries/project-docs/api-review-process.md
@@ -42,7 +42,7 @@ APIs and some code samples that show how it should be used. If changes are neces

Pull requests against **corefx** shouldn't be submitted before getting approval. Also, we don't want to receive work in progress (WIP); the reason being that we want to reduce the number of pending PRs so that we can focus on the work the community expects us to take action on.

If you want to collaborate with other people on the design, feel free to perform the work in a branch in your own fork. If you want to track your TODOs in the description of a PR, you can always submit a PR against your own fork. Also, feel free to advertise your PR by linking it from from the issue you filed against **corefx** in the first step above.
If you want to collaborate with other people on the design, feel free to perform the work in a branch in your own fork. If you want to track your TODOs in the description of a PR, you can always submit a PR against your own fork. Also, feel free to advertise your PR by linking it from the issue you filed against **corefx** in the first step above.

## API Design Guidelines

149 changes: 2 additions & 147 deletions docs/libraries/project-docs/benchmarking.md
@@ -1,150 +1,5 @@
# Benchmarking .NET Core applications

We recommend using [BenchmarkDotNet](https://github.com/dotnet/BenchmarkDotNet) as it allows specifying custom SDK paths and measuring performance not just in-proc but also out-of-proc as a dedicated executable.
All Benchmarks have been moved to the [dotnet/performance/](https://github.com/dotnet/performance/) repository.

```xml
<ItemGroup>
  <PackageReference Include="BenchmarkDotNet" Version="0.11.3" />
</ItemGroup>
```

## Defining your benchmark

See the [BenchmarkDotNet](https://benchmarkdotnet.org/articles/guides/getting-started.html) documentation -- minimally, you need to adorn a public method with the `[Benchmark]` attribute, but there are many other ways to customize what is done, such as using parameter sets or setup/cleanup methods. Of course, you'll want to bracket just the relevant code in your benchmark, ensure there are sufficient iterations to minimise noise, and leave the machine otherwise idle while you measure.
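
A minimal sketch of what such a benchmark class can look like (the class name, parameter values, and measured method are illustrative):

```cs
using BenchmarkDotNet.Attributes;

public class StringBenchmarks
{
    [Params(10, 1000)] // parameter set: every benchmark runs once per value
    public int Length;

    private string _value;

    [GlobalSetup] // one-time setup, excluded from the measurements
    public void Setup() => _value = new string('a', Length);

    [Benchmark]
    public string ToUpper() => _value.ToUpper();
}
```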

# Benchmarking local CoreFX builds

Since `0.11.1`, BenchmarkDotNet knows how to run benchmarks with CoreRun, so you just need to provide it with the path to CoreRun. The simplest way to do that is via command line arguments:

dotnet run -c Release -f netcoreapp3.0 -- -f *MyBenchmarkName* --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe"

**Hint:** If you are curious to know what BDN does internally, you just need to apply the `[KeepBenchmarkFiles]` attribute to your class or set `KeepBenchmarkFiles = true` in your config file. After running the benchmarks you can find the auto-generated files in the `%pathToBenchmarkApp\bin\Release\$TFM\` folder.
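
For example (an illustrative benchmark class using the attribute form):

```cs
using BenchmarkDotNet.Attributes;

[KeepBenchmarkFiles] // keep the auto-generated files on disk after the run
public class AllocationBenchmarks
{
    [Benchmark]
    public string CreateString() => new string('a', 100);
}
```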

The alternative is to use `CoreRunToolchain` at the code level:

```cs
class Program
{
    static void Main(string[] args)
        => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
            .Run(args, DefaultConfig.Instance.With(
                Job.ShortRun.With(
                    new CoreRunToolchain(
                        new FileInfo(@"C:\Projects\corefx\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe")
                    ))));
}
```


**Warning:** To fully understand the results you need to know what optimizations (PGO, CrossGen) were applied to a given build. Usually, the CoreCLR installed with the .NET Core SDK will be fully optimized and the fastest. On Windows, you can use the [disassembly diagnoser](http://adamsitnik.com/Disassembly-Diagnoser/) to check the produced assembly code.

## New API

If you are testing some new APIs, you need to tell BenchmarkDotNet where to find a `dotnet cli` that is capable of building the code. You can do that by using the `--cli` command line argument.
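
For example (the `--cli` path below is purely illustrative):

    dotnet run -c Release -f netcoreapp3.0 -- -f *MyBenchmarkName* --cli "C:\Projects\dotnet-cli\dotnet.exe"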

# Running in process

If you want to run your benchmarks without spawning a new process per benchmark, you can do that by passing the `-i` command line argument. Please be advised that using [InProcessToolchain](https://benchmarkdotnet.org/articles/configs/toolchains.html#sample-introinprocess) is not recommended when one of your benchmarks might have side effects which affect other benchmarks. A good example is a heavily allocating benchmark which affects the size of the GC generations.

dotnet run -c Release -f netcoreapp2.1 -- -f *MyBenchmarkName* -i
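
Alternatively, as a sketch, you can opt in at the code level (assuming the `InProcessToolchain` type from the linked documentation):

```cs
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;
using BenchmarkDotNet.Toolchains.InProcess;

class Program
{
    // Runs every benchmark inside the current process instead of
    // spawning a dedicated executable per benchmark.
    static void Main(string[] args)
        => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
            .Run(args, DefaultConfig.Instance.With(
                Job.Default.With(InProcessToolchain.Instance)));
}
```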

# Recommended workflow

1. Before you start benchmarking the code, you need to build the entire CoreFX in Release, which is going to generate the right CoreRun bits for you:

C:\Projects\corefx>build.cmd -c Release -arch x64

After that, you should be able to find `CoreRun.exe` in a location similar to:

C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe

2. Create a new .NET Core console app using your favorite IDE
3. Install BenchmarkDotNet (0.11.1+)
4. Define the benchmarks and pass the arguments to BenchmarkSwitcher

```cs
class Program
{
    static void Main(string[] args) => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```
5. Run the benchmarks using `--coreRun` from the first step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\before"

6. Go to the corresponding CoreFX source folder (for example `corefx\src\System.Collections.Immutable`)
7. Apply the optimization that you want to test
8. Rebuild the given CoreFX part in Release:

dotnet msbuild /p:ConfigurationGroup=Release

You should notice that the given `.dll` file has been updated in the `CoreRun` folder.

9. Run the benchmarks using `--coreRun` from the first step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\after"

10. Compare the results and repeat steps `7 - 9` until you are happy with the results.

## Benchmarking APIs implemented within System.Private.Corelib

1. The steps for this scenario are very similar to the above recommended workflow, with a couple of extra steps to copy bits from one repo to the other. Before you start benchmarking the code, you need to build the entire CoreCLR in Release, which is going to generate `System.Private.Corelib.dll` for you:

C:\Projects\coreclr>build.cmd -release -skiptests

After that, you should be able to find `System.Private.Corelib.dll` in a location similar to:

C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

2. Build the entire CoreFX in Release using your local private build of coreclr (see [Testing With Private CoreCLR Bits](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/developer-guide.md#testing-with-private-coreclr-bits))

C:\Projects\corefx>build.cmd -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

After that, you should be able to find `CoreRun.exe` in a location similar to:

C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe

3. Create a new .NET Core console app using your favorite IDE
4. Install BenchmarkDotNet (0.11.1+)
5. Define the benchmarks and pass the arguments to BenchmarkSwitcher

```cs
class Program
{
    static void Main(string[] args) => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```
6. Run the benchmarks using `--coreRun` from the second step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\before"

7. Go to the corresponding CoreCLR source folder where the API you want to change exists (for example `coreclr\src\System.Private.CoreLib\shared\System`)
8. Apply the optimization that you want to test
9. Rebuild System.Private.Corelib with your change (optionally adding `-skipnative` if the change is isolated to managed code):

C:\Projects\coreclr>build.cmd -release -skiptests -skipnative -skipbuildpackages

10. For the next step, you have one of two options:

- Rebuild the given CoreFX part in Release:

C:\Projects\corefx>build.cmd -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

- Force a refresh of the CoreCLR hard-link copy (this ends up being much faster than the first option):

C:\Projects\corefx>build.cmd -restore -c Release /p:CoreCLROverridePath=C:\Projects\coreclr\bin\Product\Windows_NT.x64.Release

11. Run the benchmarks using `--coreRun` from the second step. Save the results in a dedicated folder.

dotnet run -c Release -f netcoreapp3.0 -- -f * --coreRun "C:\Projects\corefx\artifacts\bin\testhost\netcoreapp-Windows_NT-Release-x64\shared\Microsoft.NETCore.App\9.9.9\CoreRun.exe" --artifacts ".\after"

12. Compare the results and repeat steps `8 - 11` until you are happy with the results.

# Reporting results

Often in a GitHub Pull Request or issue you will want to share performance results to justify a change. If you add the `MarkdownExporter` to the configuration, BenchmarkDotNet will create a Markdown (*.md) file in the `BenchmarkDotNet.Artifacts` folder which you can paste in, along with the code you benchmarked.
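
A sketch of wiring that up (this uses the `MarkdownExporter.GitHub` flavour; other flavours exist):

```cs
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Running;

class Program
{
    // Adding an exporter makes BenchmarkDotNet write *.md result files
    // under the BenchmarkDotNet.Artifacts folder after the run.
    static void Main(string[] args)
        => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly)
            .Run(args, DefaultConfig.Instance.With(MarkdownExporter.GitHub));
}
```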

# References
- [BenchmarkDotNet](http://benchmarkdotnet.org/)
- [BenchmarkDotNet Github](https://github.com/dotnet/BenchmarkDotNet)
- [.NET Core SDK](https://github.com/dotnet/core-setup)
Please read the [Benchmarking workflow for CoreFX](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-corefx.md) document to find out how to build and run the Benchmarks.
8 changes: 7 additions & 1 deletion docs/libraries/project-docs/developer-guide.md
@@ -44,7 +44,7 @@ For more details on the build configurations see [project-guidelines](../coding-

The most common workflow for developers is to call `build` from the root once and then go and work on the individual library that you are trying to make changes for.

By default build only builds the product libraries and none of the tests. If you want to build the tests you can call `build -buildtests`. If you want to run the tests you can call `build -test` or `build -performanceTest`. To build and run the tests combine both arguments: `build -buildtests -test`. To build both the product libraries and the test libraries pass `build -build -buildtests` to the command line. If you want to further configure which test libraries to build you can pass `/p:TestProjectFilter=Tests|PerformanceTests` to the command.
By default build only builds the product libraries and none of the tests. If you want to build the tests you can call `build -buildtests`. If you want to run the tests you can call `build -test`. To build and run the tests combine both arguments: `build -buildtests -test`. To build both the product libraries and the test libraries pass `build -build -buildtests` to the command line.

If you invoke the build script without any arguments, the default arguments `-restore -build` will be executed. Note that `-restore` and `-build` are only implicit if no actions are passed in.

@@ -206,6 +206,12 @@ One can build in Debug or Release mode from the root by doing `build -c Release`

One can build 32- or 64-bit binaries, or binaries for any architecture, by specifying `build -arch [value]` at the root or `/p:ArchGroup=[value]` after the `dotnet msbuild` command in a project.

### Benchmarks

All Benchmarks have been moved to the [dotnet/performance/](https://github.com/dotnet/performance/) repository.

Please read the [Benchmarking workflow for CoreFX](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-corefx.md) document to find out how to build and run the Benchmarks.

### Tests

We use the OSS testing framework [xunit](http://xunit.github.io/).