Wednesday, December 3, 2025

AI Coding Agents Comparison 2025

AI is increasingly being used for coding these days. Many people discuss which model is better and which tasks different IDEs and agents are best suited for, but few talk about which agent actually writes better code. One example is this experiment: it shows that even with the same model, some agents can successfully solve a problem while others cannot.

This is important because code quality can vary greatly between agents, even when using the same model.

Here are some tests to explore this difference.

For testing, the task will be to write a fairly simple and small application.

It will be a Flutter desktop application that shows an EUR/USD candlestick chart.

A standard empty application will be created to provide a single starting point: flutter create testai_chart_quotes.

The first prompt is the most difficult: it asks the agent to implement the main part of the application. Subsequent prompts are logically smaller and add features. Here are the prompts.

Prompt 1: Base implementation

Here is an empty Flutter Windows desktop application.
Transform it into a currency quotes app that:
1) Has a main window with a text input for days (default 50) and a refresh button. They are located at the top of the window.
2) Has a chart with candlesticks showing EUR/USD daily rates in the main window. The chart takes all window space except the space needed for other controls.
3) Starts with empty data and only loads data when the refresh button is clicked.
4) When the refresh button is clicked, it loads quotes from the Alpha Vantage API and shows them in the chart. It loads the number of quotes specified in the days control.


Prompt 2: Error handling

Add error handling and logging into a local file.


Prompt 3: Indication

Add a loading indicator that shows during API requests and disables the refresh button.


Prompt 4: Input validation

Add input validation for the days field.


Prompt 5: Timeframes

Add buttons at the top of the window named: M1, M5, M30, H1, H4, D. They should change how quotes are shown, changing the quotes period respectively to: 1 minute, 5 minutes, 30 minutes, 1 hour, 4 hours, 1 day.
The button with the current timeframe should be pressed, the others not. "D" is the default.
When a button is pressed and it is not the current timeframe, the respective quotes should be loaded and shown.
The refresh button loads and shows quotes with the current timeframe.


Prompt 6: Indicator

Add a checkbox "Show Simple Moving Average" (default is checked) at the top of the window, and add a textbox after it labeled "Period" (default 7).
If the checkbox is checked, the chart should show a simple moving average with the specified period, calculated from the Close price.
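
The moving average itself is a small, well-defined calculation. Roughly this (a minimal C# sketch for reference; the app itself is written in Dart/Flutter):

static double[] SimpleMovingAverage(double[] close, int period)
{
    // sma[i] is the average of the `period` closes ending at index i;
    // the first (period - 1) points have no full window and are left as NaN
    var sma = new double[close.Length];
    double sum = 0;
    for (int i = 0; i < close.Length; i++)
    {
        sum += close[i];
        if (i >= period)
            sum -= close[i - period];
        sma[i] = i >= period - 1 ? sum / period : double.NaN;
    }
    return sma;
}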

It's important to note that the Alpha Vantage API was chosen on the AI's recommendation, but it later turned out that its demo access is very limited and provides very little functionality. Consequently, validation of prompt 5 was limited to a compilation check only.

Software used:

  • Cursor 2.1.39. Models listed: Opus 4.5, Sonnet 4.5, GPT 5.1 Codex, GPT 5.1, Gemini 3 Pro, GPT 5.1 Codex Mini, Grok Code, GPT 4.1
  • VS Code 1.106.3
  • GitHub Copilot 1.388.0, GitHub Copilot Chat 0.33.3 (VS Code extension)
  • Roo Code 3.34.8 (VS Code extension)
  • Continue 1.2.11 (VS Code extension)
  • Kilo Code 4.125.1 (VS Code extension)

Below are the results. Some iterations are not listed or counted due to issues with the Alpha Vantage demo API and the manual URL fixes required to make it work.

The source repository with all commits and a table with almost raw results is available here: https://github.com/liiws/testai-chart-quotes.

Below are the condensed results tables.

The following table highlights the best-performing models to showcase the current state of the art as a practical reference.

Agent | Prompt # | Amendment Iterations | Total Time (mm:ss) | Comment
Cursor, auto | 1 | 0 | 3:34 |
 | 2 | 1 | 4:19 |
 | 3 | 0 | 0:39 |
 | 4 | 0 | 0:38 |
 | 5 | 0 | 1:49 |
 | 6 | 0 | 2:31 | Success
Copilot, Openrouter Grok 4.1 Fast | 1 | 0 | 1:39 |
 | 2 | 0 | 1:37 |
 | 3 | 0 | 0:52 |
 | 4 | 2 | 3:47 | Success (compilation error was fixed)

The exact model used by Cursor in "auto" mode is unknown, but it was likely more capable than Grok 4.1 Fast. The results show that while a simpler model is usually much faster, if it fails, fixing the error can take significantly more time.

The following table uses the same GPT-4.1 model across different agents. This allows for a direct comparison of the agents.

Agent | Prompt # | Amendment Iterations | Total Time (mm:ss) | Cost ($) | Comment
Cursor, GPT 4.1 | 1 | 3 | 1:21 | | Asked to edit pubspec.yaml and then run flutter commands manually
 | 2 | 0 | 0:17 | |
 | 3 | 0 | 0:10 | |
 | 4 | 0 | 0:10 | |
 | 5 | 0 | 0:35 | |
 | 6 | 0 | 0:24 | | Success
Cursor, GPT 4.1, Try 2 | 1 | 2 | 1:02 | | Chart still looks wrong. Stopped trying
Copilot, GPT 4.1 | 1 | 7 | 1:34 | | Runtime error. Stopped trying (same error)
Copilot, GPT 4.1, after Cursor GPT 4.1 prompt 1 fixed | 2 | 0 | 0:32 | |
 | 3 | 0 | 0:11 | |
 | 4 | 0 | 0:08 | |
 | 5 | 1 | 0:33 | |
 | 6 | 3 | 0:51 | | Success
Copilot, Openrouter GPT 4.1 | 1 | 6 | 3:28 | 0.46 | Chart looks wrong. Stopped trying (same wrong result)
Roo Code, Openrouter GPT 4.1 | 1 | 2 | 1:54 | 0.59 | Chart looks wrong. Stopped trying (same error)
Continue, Openrouter GPT 4.1 | 1 | 0 | 0:11 | 0.01 | Many compilation errors. Stopped trying (it could not edit files itself; everything had to be done manually)
Kilo Code, Openrouter GPT 4.1 | 1 | 4 | 3:14 | 0.97 |
 | 2 | 0 | 0:28 | 0.19 |
 | 3 | 0 | 0:17 | 0.16 |
 | 4 | 0 | 0:20 | 0.06 |
 | 5 | 0 | 0:35 | 0.24 |
 | 6 | 1 | 1:52 | 0.54 | Success (minor error fixed: missing checkbox)

Note that the difference lies not only in whether the code compiles and runs, but also in domain knowledge. For instance, Cursor in "auto" mode was able to find the proper URL for the demo API (using the "demo" key and omitting the format specifier, which fails in demo mode).

Even from this simple test it's clear that agents' code quality can be very different. Using the same GPT-4.1 model, Cursor successfully completed prompt 1 (it worked on the second try as well, although the chart looked slightly wrong). In contrast, Copilot and Roo Code failed to complete prompt 1, repeating the same error. Among the open-source tools, only Kilo Code managed to complete prompt 1 independently, although it asked the developer to add debug information manually.

The other aspect is cost. Pricing models differ greatly: Cursor uses a subscription with limited requests, while using your own key via Openrouter (or similar) means you only pay for the tokens you use, with no monthly fee. This allows you to choose between cheaper (or even free) models and more capable but expensive ones, depending on your current needs. Whether it's worth using a worse and cheaper model is a different question, which is beyond the scope of this article.

Conclusion

Here is a breakdown of each tool's performance.

Cursor proved to be a highly capable tool, delivering excellent results in this test.

Copilot produced worse code quality than Cursor, but it remains a good tool. Its major advantage is potential cost savings when used with a service like Openrouter. The initial run of Prompt 1 cost $0.16 (though the resulting app did not work).

Roo Code (and likely Cline) offers excellent automation, but the final code quality was poor. It also tends to use many more tokens than Copilot, making it less efficient. The first iteration of Prompt 1 cost $0.40 (for a non-functional application).

Continue appears to have failed at basic functionality during this test, as it could not edit files or run commands, requiring all actions to be performed manually.

Kilo Code stands out as the only tool besides Cursor that successfully fixed Prompt 1. The first run cost $0.22 (for a non-working app), but it demonstrated the ability to guide the debugging process and resolve all subsequent errors.

The two most effective tools in this evaluation are Cursor and Kilo Code. They achieved similar high code quality on the primary task, but they operate on fundamentally different pricing models.

A special mention goes to Copilot. Despite its lower output quality, it offers a free tier and Openrouter compatibility. While less capable, it is also more token-efficient, making it a noteworthy budget-conscious option.

Monday, December 16, 2019

LINQ2DB vs EF Core Benchmark under .NET Framework 4.8 and Core 3.1

Recently .NET Core 3.1 was released. It is more optimized than .NET Framework and will soon be its successor.

This is why it's interesting to compare them.

The benchmark was done with the same rules as before.

Hardware: i7-8750H, DDR4-2667.
Software: Win 10 x64 (64-bit), .NET 4.8, Core 3.1, linq2db 2.9.4, EF Core 3.1, ADO.NET for .NET Framework v4.0.30319 (standard), ADO.NET for Core 4.8.0 (NuGet package).

Simple TOP 10 query

ORM: LINQ2DB is very good for simple queries, about twice as fast as EF Core. A LINQ2DB raw query can even be faster than calling ADO.NET manually (which might be measurement error).
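
For reference, the two linq2db variants being measured look roughly like this (a minimal sketch; the Order entity and the "Northwind" connection name are illustrative, not taken from the benchmark code):

using System;
using System.Collections.Generic;
using System.Linq;
using LinqToDB;
using LinqToDB.Data;

public class Order
{
    public int      OrderID;
    public DateTime OrderDate;
}

public static class Queries
{
    // LINQ query: built and translated to SQL by linq2db
    public static List<Order> Top10Linq()
    {
        using (var db = new DataConnection("Northwind"))
            return db.GetTable<Order>().Take(10).ToList();
    }

    // Raw SQL query: no translation step, linq2db only maps the result
    public static List<Order> Top10RawSql()
    {
        using (var db = new DataConnection("Northwind"))
            return db.Query<Order>("SELECT TOP 10 OrderID, OrderDate FROM Orders").ToList();
    }
}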

Platform: Overall performance of Core is just a little bit worse than .NET. Context initialization for Core is slower, but query compilation and mapping are faster than under .NET.

Simple TOP 500 query

ORM: For slightly less simple queries, LINQ2DB is still faster than EF Core, but not by much.

Platform: Context initialization for Core is slower than for .NET, but overall performance is better.

Complex TOP 10 query

ORM: LINQ2DB is slightly faster than EF Core, but the difference is small.

Platform: Core is a little faster than .NET, but they are nearly the same.

Complex TOP 500 query

The picture is interesting.

ORM: For complex queries with many rows, EF Core can sometimes be faster than LINQ2DB and even ADO.NET.

Conclusion

EF Core is a lot faster now than before.

.NET Core can be faster than .NET Framework when a program is written to take advantage of its optimizations. EF Core is such an example. You can see that ADO.NET (despite having a separate version for Core) and LINQ2DB perform differently under .NET and Core, but the difference is not as big as for EF Core.

EF Core works much faster under .NET Core. This is clearly visible in the last picture: only EF Core shows a significant difference.

Now, with .NET Core and EF Core both at 3.1, they may be a better choice than LINQ2DB.

EF Core is slightly slower, but not by much, and sometimes it is even faster.

EF Core also has very good support from one of the biggest companies, unlike LINQ2DB, where you cannot count on that.

Another side of the choice is LTS.

Even LTS versions of .NET Core have only 3 years of support, unlike .NET Framework, which has 10+ years (via the Windows 10 LTSC version). This doesn't matter, and is even good, if you are going to keep up with the newest tools, but it can be a problem otherwise.

Raw results (Excel).

View project source code at GitHub.

Friday, August 18, 2017

Entity Framework Core 2.0 and LINQ2DB Performance

We will look at EF Core 2.0 performance compared to LINQ2DB, the fastest ORM nowadays, and also to Entity Framework 6.

Hardware used: i5-4200H, DDR3-1600, Win 10 x64 1607.

Software used: SQL Server 2016 SP1, VS 2017, .NET 4.6.1, EF 6.1.3, LINQ to DB 1.8.3, EF Core 2.0.

The Northwind database will be used.

You can see the SQL queries and testing methods in one of the previous articles.

Simple TOP 10 query

Here and below, the gray part of each bar denotes context initialization.

EF Core still has a big overhead compared to ADO.NET and LINQ2DB on simple queries. In my opinion, the performance impact can range from 50-70% for simple systems to 5-10% for enterprise systems (for simple queries).

Also, you can see that EF Core can't execute raw SQL queries quickly. And EF Core can't return custom result shapes from raw SQL: you are limited to entities.
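
For illustration, raw SQL in EF Core 2.0 has to be rooted in an entity DbSet and must return full entity rows; roughly like this sketch (NorthwindContext and Order are assumed names, not the benchmark's code):

using System.Linq;
using Microsoft.EntityFrameworkCore;

static class RawSqlExample
{
    public static void Run(NorthwindContext ctx)
    {
        // FromSql must start from an entity DbSet, and the SELECT must return
        // all columns of that entity; projecting to an arbitrary custom shape
        // in SQL is not supported.
        var orders = ctx.Orders
            .FromSql("SELECT * FROM Orders")
            .Take(10)
            .ToList();
    }
}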

Compiled EF Core queries are a bit faster (but still far from ADO.NET or LINQ2DB).
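
A compiled query is declared once and reused; roughly like this (a sketch, with NorthwindContext and Order again assumed names):

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

static class CompiledQueries
{
    // The query plan is compiled once and reused across context instances,
    // skipping the LINQ-to-SQL translation on every call.
    private static readonly Func<NorthwindContext, IEnumerable<Order>> Top10Orders =
        EF.CompileQuery((NorthwindContext ctx) => ctx.Orders.Take(10));

    public static List<Order> GetTop10(NorthwindContext ctx)
    {
        return Top10Orders(ctx).ToList();
    }
}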

Simple TOP 500 query

The performance of EF Core queries (both LINQ and raw SQL) is close to Entity Framework 6. ADO.NET and LINQ2DB are a little faster.

EF Core raw SQL queries are dramatically slow, so they are omitted from the charts below.

Complex TOP 10 query

For complex queries EF Core looks pretty good and can be compared to ADO.NET and LINQ2DB. Note that ADO.NET and LINQ2DB raw queries are a bit faster.

Complex TOP 500 query

For complex queries with many rows, both LINQ and compiled EF Core queries perform on par with ADO.NET and LINQ2DB.

Conclusion

EF Core 2.0 still can't be used for raw SQL queries, both because of its speed and because it cannot return custom result sets.

Another bad thing is that EF Core is slower for simple queries.

The one good thing about EF Core is that for complex queries it is nearly as fast as ADO.NET and LINQ2DB (though still a bit slower).

Raw results (Excel).

View project source code at GitHub.

Friday, February 24, 2017

Reflection vs Compiled Expression Performance

Performance of reflection and compiled expressions will be shown in this post.

There's a nice library, ObjectListView, which has lots of features and is also easy to use, because there is no need to fill ListViewItem manually.

For User class:

class User
{
    public int Id;
    public string Name;
    public DateTime BirthDate;
}

instead of this code:

var lvis = new List<ListViewItem>();
foreach (var user in users)
{
    lvis.Add(new ListViewItem(new[]
    {
        user.Id.ToString(),
        user.Name,
        user.BirthDate.ToString(),
    }));
}

you can simply pass collection of your own classes:

objectListView.Objects = users;

This library is an example of where reflection can be used.

But what should be used: reflection, compiled expressions, or emit? The following tests will show. Emit won't be tested because it is difficult to use; an assumption about emit can be made by looking at the manual (speed) and compiled expression (startup overhead) tests.

Three tests will be made:

  1. Manual.
  2. Reflection.
  3. Compiled expression.

Each test consists of 200 iterations for warmup and 200 iterations for the test itself.

Every test creates a list of ListViewItem for the specified object type, except the manual test, which works only with the User type.

Hardware: i5-4200H, DDR3-1600, Win 10 x64 1607. Software: VS 2015, .NET 4.6.1.

Code for manual test:

public static List<ListViewItem> CreateListItemsManual(List<User> users)
{
    var items = new List<ListViewItem>();
    foreach (var user in users)
    {
        var subitems = new[]
        {
            user.Id.ToString(),
            user.Name,
            user.BirthDate.ToString("dd.MM.yyyy (ddd)"),
        };
        var lvi = new ListViewItem(subitems);
        items.Add(lvi);
    }
    return items;
}

Code for reflection test:

public static List<ListViewItem> CreateListItemsReflection(Type type, IEnumerable<object> users)
{
    var items = new List<ListViewItem>();
    var fields = type.GetFields();
    foreach (var user in users)
    {
        var subitems = new string[fields.Length];
        for (int i = 0; i < fields.Length; i++)
        {
            string value;
            var field = fields[i];
            if (field.FieldType == typeof(string))
            {
                value = (string)field.GetValue(user);
            }
            else if (field.FieldType == typeof(int))
            {
                value = ((int)field.GetValue(user)).ToString();
            }
            else if (field.FieldType == typeof(DateTime))
            {
                value = ((DateTime)field.GetValue(user)).ToString("dd.MM.yyyy (ddd)");
            }
            else
            {
                value = field.GetValue(user).ToString();
            }
            subitems[i] = value;
        }
        var lvi = new ListViewItem(subitems);
        items.Add(lvi);
    }
    return items;
}

Code for compiled expression test:

public static List<ListViewItem> CreateListItemsCompiledExpression(Type type, IEnumerable<object> users)
{
    var items = new List<ListViewItem>();
    var fields = type.GetFields();
    Func<object, string>[] fieldGetters = new Func<object, string>[fields.Length];
    for (int i = 0; i < fields.Length; i++)
    {
        Func<object, string> fieldGetter;
        Expression<Func<object, string>> lambda;
        var field = fields[i];
        // user => 
        var userObject = Expression.Parameter(typeof(object), "user");
        // user => (User)user
        var user = Expression.Convert(userObject, type);
        // user => ((User)user)."Field"
        var fld = Expression.Field(user, field);
        if (field.FieldType == typeof(string))
        {
            // user => ((User)user)."Field"
            lambda = Expression.Lambda<Func<object, string>>(fld, userObject);
        }
        else if (field.FieldType == typeof(int))
        {
            // user => ((User)user)."Field".ToString() // int.ToString()
            var toString = Expression.Call(fld, typeof(int).GetMethod("ToString", new Type[0]));
            lambda = Expression.Lambda<Func<object, string>>(toString, userObject);
        }
        else if (field.FieldType == typeof(DateTime))
        {
            // user => ((User)user)."Field".ToString("dd.MM.yyyy (ddd)")
            var toString = Expression.Call(
                fld,
                typeof(DateTime).GetMethod("ToString", new Type[] { typeof(string) }),
                Expression.Constant("dd.MM.yyyy (ddd)"));
            lambda = Expression.Lambda<Func<object, string>>(toString, userObject);
        }
        else
        {
            // user => ((User)user)."Field".ToString() // object.ToString()
            var toString = Expression.Call(fld, typeof(object).GetMethod("ToString", new Type[0]));
            lambda = Expression.Lambda<Func<object, string>>(toString, userObject);
        }
        fieldGetter = lambda.Compile();
        fieldGetters[i] = fieldGetter;
    }
    foreach (var user in users)
    {
        var subitems = new string[fields.Length];
        for (int i = 0; i < fields.Length; i++)
        {
            subitems[i] = fieldGetters[i](user);
        }
        var lvi = new ListViewItem(subitems);
        items.Add(lvi);
    }
    return items;
}

Results

There's not much difference in absolute time when there are few items: ~0.5 ms. This time is the startup overhead of expression compilation. It doesn't matter for UI: nobody can notice a 0.5 ms difference.

Let's see the whole graph below.

Reflection is slower by about 6-7 ms for 20,000 elements. Again, this is not a difference anyone can see in a UI.

But what should be used in real-life projects? Is it worth writing universal, simple code using reflection or expressions, or is it better to spend the time writing specific code for every type manually to achieve the best performance for both few and many elements?

For UI components, if it's definitely known that there won't be many elements, reflection can be used.

But what if it's a server application, and/or there can be cases with both few and many elements, and/or performance is required? Already at 100-200 elements, the first graph shows a ~1.5x performance difference between the manual and reflection methods.

Fortunately, in real applications the types used do not change while the program runs. This means that once expressions are compiled, they can be cached.

This allows using compiled expressions without the startup overhead.
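
A minimal sketch of such a cache, assuming the getter-compilation loop from CreateListItemsCompiledExpression above is condensed into a helper (the date format argument is omitted for brevity):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Linq.Expressions;

static class FieldGetterCache
{
    // one compiled getter array per type, built on first use and reused afterwards
    private static readonly ConcurrentDictionary<Type, Func<object, string>[]> _cache =
        new ConcurrentDictionary<Type, Func<object, string>[]>();

    public static Func<object, string>[] GetFieldGetters(Type type)
    {
        return _cache.GetOrAdd(type, BuildFieldGetters);
    }

    // same idea as the loop above, condensed: obj => ((T)obj).Field.ToString()
    private static Func<object, string>[] BuildFieldGetters(Type type)
    {
        return type.GetFields().Select(field =>
        {
            var obj = Expression.Parameter(typeof(object), "obj");
            var fld = Expression.Field(Expression.Convert(obj, type), field);
            Expression body = field.FieldType == typeof(string)
                ? (Expression)fld
                : Expression.Call(fld, field.FieldType.GetMethod("ToString", Type.EmptyTypes));
            return Expression.Lambda<Func<object, string>>(body, obj).Compile();
        }).ToArray();
    }
}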

Script with raw (200 iterations) results (R).

View project source code at GitHub.

Saturday, February 11, 2017

EF Core vs LINQ2DB

Entity Framework Core recently reached v1.1.0. Though it still lacks some critical features, like "GROUP BY" SQL translation (see its roadmap), it's time to test it.

The following frameworks will be tested:

  1. Entity Framework CodeFirst (LINQ query, models generated from DB)
  2. Entity Framework (raw SQL query)
  3. ADO.NET
  4. LINQ to DB (LINQ query, model entities generated from DB)
  5. LINQ to DB (raw SQL query)
  6. Entity Framework Core (doesn't support raw SQL execution at this moment)

Hardware used: i5-4200H, DDR3-1600, Win 10 x64 1607.

Software used: SQL Server 2016 SP1, VS 2015, .NET 4.6.1, EF 6.1.3, LINQ to DB 1.7.5, EF Core 1.1.0.

And the default Northwind database.

The tests are the same as in one of the previous articles.

Note: EF Core doesn't use "GROUP BY" in the generated SQL; instead, it loads the rows and groups them in memory. This can lead to high load on the database in production.
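
For example, a query like the following sketch (assuming a Northwind-style context; names are illustrative) is executed by EF Core 1.1 by fetching all rows and grouping them on the client:

using System.Linq;

static class GroupByExample
{
    // NorthwindContext is an assumed EF Core context with an Orders DbSet.
    public static void CountOrdersPerCountry(NorthwindContext ctx)
    {
        // In EF Core 1.1 this does NOT produce "GROUP BY ShipCountry" in SQL:
        // all order rows are fetched and then grouped client-side, in memory.
        var ordersPerCountry = ctx.Orders
            .GroupBy(o => o.ShipCountry)
            .Select(g => new { Country = g.Key, Count = g.Count() })
            .ToList();
    }
}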

Context Initialization

EF Core's context initialization is twice as fast as EF 6's. This matters for simple and fast queries.

Simple TOP 10 query

Here and below, the grey part of each bar is context initialization.

We can see that EF Core is faster than EF 6 when running simple queries, both in context initialization and in everything else, but overall it is still twice as slow as LINQ2DB.

Depending on the usage this might not be so bad, because the absolute time is low.

Simple TOP 500 query

Results are almost the same, but now EF Core is not far from ADO.NET and LINQ2DB.

Complex TOP 10 query

There is almost no difference between the frameworks, except EF 6, which is 2x slower than the others.

Complex TOP 500 query

The complex query with many result rows makes all frameworks perform nearly the same (again, except EF 6, which is 2x slower than the others).

Conclusions

EF Core is faster than EF 6, which is good. But it still can't use the "GROUP BY" clause in SQL even though version 1.1.0 has been released, which is bad.

Another bad thing about EF Core is that it doesn't support raw SQL execution. This almost doesn't matter for complex queries, but applications usually have many simple queries, and here EF Core is weak: it can't be optimized further. Change tracking doesn't affect selects, so the only remaining optimization is raw SQL.

So, if performance is not significant, EF Core can be chosen. Otherwise, even EF 6 might be preferable because it supports raw SQL execution, which will help with heavy queries.

And if performance is important, or if change tracking is not required, then LINQ2DB may be the best choice. LINQ2DB's LINQ queries are not much slower than raw ADO.NET, even for simple queries. And if that's not enough, raw SQL can be used. LINQ2DB is not new, so it doesn't have as many bugs as EF Core currently does.

Raw results (Excel).

View project source code at GitHub.

Sunday, March 22, 2015

Performance of LINQ to DB vs Entity Framework vs BLToolkit vs ADO.NET

In recent years BLToolkit has been developed slowly. The reason is that its author, Igor Tkachev, decided to write a new ORM: LINQ to DB.

He says it provides the fastest LINQ database access. It supports 12 database providers, including MSSQL, SQLite, and Postgres, and it supports mass UPDATE and DELETE as well as Bulk Copy.
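
For illustration, mass UPDATE and DELETE in linq2db look roughly like this (a sketch; the Order entity and the "Northwind" connection name are illustrative):

using System;
using System.Linq;
using LinqToDB;
using LinqToDB.Data;

public class Order
{
    public int      OrderID;
    public string   ShipCountry;
    public decimal  Freight;
    public DateTime OrderDate;
}

public static class MassOperations
{
    public static void Run()
    {
        using (var db = new DataConnection("Northwind"))
        {
            // Mass UPDATE: translated to a single SQL UPDATE, no entities are loaded
            db.GetTable<Order>()
              .Where(o => o.ShipCountry == "Germany")
              .Set(o => o.Freight, o => o.Freight * 1.1m)
              .Update();

            // Mass DELETE: a single SQL DELETE statement
            db.GetTable<Order>()
              .Where(o => o.OrderDate < new DateTime(1997, 1, 1))
              .Delete();
        }
    }
}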

Also, LINQ to DB provides a mechanism similar to the EF Code First generator: it has a T4 template that generates a code structure from the database. All you need to do is set the connection string and execute the T4 template.

BLToolkit also supports LINQ, but its main strength is not only speed but also mapping.

Let's compare it with other ORMs.

These tests were performed on i5-4200H, DDR3-1600, Win 8.1 x64, SQL Server 2014, VS 2013, .NET 4.5, EF 6.1.2, BLToolkit 4.2.0, LINQ to DB 1.0.7.1. The default Northwind database was used.

The tests used are the same as in the previous article.

Six methods of working with the database were tested:

  1. DbContext CodeFirst (LINQ query, models generated from DB)
  2. DbContext CodeFirst (raw SQL query)
  3. ADO.NET
  4. Business Logic Toolkit (raw SQL query)
  5. LINQ to DB (LINQ query, model entities generated from DB)
  6. LINQ to DB (raw SQL query)

Context Initialization

EF CodeFirst with a LINQ query takes much more time than the others. LINQ to DB with a LINQ query and CodeFirst with raw SQL take nearly the same time, 2-3 times more than the other raw SQL methods. But anyway, for complex queries it doesn't matter much.

Simple TOP 10 query

Here and below, the grey part of each bar is context initialization.

For the simple query, a LINQ to DB LINQ query takes twice as much time as raw SQL. But it is still much faster than EF, and even slightly faster than CodeFirst with raw SQL.

Simple TOP 500 query

When the number of result rows is not small, then even for the simple query the difference is not big. That's because mapping takes a significant part of the time, while compilation of a simple LINQ query does not. We can also see that the new linq2db architecture is faster than BLToolkit: even a LINQ query with linq2db is faster than raw SQL with BLToolkit. LINQ to DB with a raw SQL query is the same speed as ADO.NET.

Complex TOP 10 query

LINQ to DB compilation is very fast. This makes its LINQ query speed almost the same as ADO.NET. EF CodeFirst with a LINQ query takes twice as much time as the others.

Complex TOP 500 query

The complex TOP 500 query results are the same.

Conclusions

LINQ to DB is very fast with both raw SQL and LINQ queries. For simple and small queries it is possible to use raw SQL instead of LINQ, but in absolute time it makes almost no difference.

LINQ to DB is a good ORM choice if you don't need change tracking, and if you don't need all of BLToolkit's mapping capabilities (linq2db supports type-to-type mapping).

Raw results (XSLT).

View project source code at GitHub.

Thursday, January 8, 2015

Entity Framework DbContext vs ObjectContext vs LINQ2SQL vs ADO.NET vs Business Logic Toolkit Performance

With Entity Framework Microsoft recommends using DbContext instead of ObjectContext. So let's compare their performance.

These tests were performed on i5-4200H, DDR3-1600, Win 8.1 x64, SQL Server 2014, VS 2013, .NET 4.5, EF 6.1.2. Default Northwind database was used.

Tests include two different queries (simple, complex) and two lengths (10, 500 rows). Simple query:

SELECT TOP 10 O.OrderID, O.OrderDate, C.Country, C.CompanyName
FROM Orders O
JOIN Customers C ON O.CustomerID = C.CustomerID

Complex query:

SELECT TOP 10 OD.Quantity, OD.UnitPrice, OD.Discount, O.ShipCountry, S.Country
FROM Orders O
JOIN [Order Details] OD ON O.OrderID = OD.OrderID
JOIN Products P ON OD.ProductID = P.ProductID
JOIN Categories Cat ON P.CategoryID = Cat.CategoryID
JOIN Suppliers S ON P.SupplierID = S.SupplierID
WHERE
    Cat.CategoryID IN (@categoryIds)
    AND S.SupplierID IN (@supplierIds)
ORDER BY OD.Discount DESC

Six methods of working with the database were tested:

  1. DbContext CodeFirst (generated from DB)
  2. DbContext Designer (generated from DB)
  3. ObjectContext (generated from DB with EdmGen.exe)
  4. LINQ2SQL
  5. ADO.NET
  6. Business Logic Toolkit (raw SQL query)

Each method was tested with 1000 iterations (and 100 iterations to warm up).

Context Initialization

Since context initialization can't be measured directly, it was measured in the following way. Let's say we executed a query:

using (var ctx = new MyContext())
{
    var list = ctx.Products.Where(r => r.Name.Length < 10).ToList();
}

then if we executed this query twice:

using (var ctx = new MyContext())
{
    var list = ctx.Products.Where(r => r.Name.Length < 10).ToList();
    var list2 = ctx.Products.Where(r => r.Name.Length < 10).ToList();
}

we get a system of linear equations, where q is the time of one query, ctx is the context initialization time, and x and y are the two measured times:

q + ctx = x
2*q + ctx = y

and now it's easy to find context initialization time:

ctx = 2*x - y

Context initialization was measured using "Simple TOP 10" query.
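
In code, this measurement could look roughly like the following sketch (using the MyContext from the snippets above; warmup omitted):

using System;
using System.Diagnostics;
using System.Linq;

static class ContextInitMeasurement
{
    // x = total time of (init + 1 query) loops, y = total of (init + 2 queries);
    // per-iteration context initialization is then (2*x - y) / iterations
    public static TimeSpan Measure(int iterations)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            using (var ctx = new MyContext())
                ctx.Products.Where(r => r.Name.Length < 10).ToList();
        var x = sw.Elapsed;

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            using (var ctx = new MyContext())
            {
                ctx.Products.Where(r => r.Name.Length < 10).ToList();
                ctx.Products.Where(r => r.Name.Length < 10).ToList();
            }
        var y = sw.Elapsed;

        return TimeSpan.FromTicks((2 * x.Ticks - y.Ticks) / iterations);
    }
}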

Context initialization time for DbContext CodeFirst and Designer is nearly the same, while ObjectContext requires twice as much time. ADO.NET and BLToolkit have nearly the same minimal time, three times lower than DbContext. LINQ2SQL takes half the time of DbContext.

But as you can see below, the absolute context initialization time does not always matter much.

Simple TOP 10 query

For the simple query with a few rows, where the database request takes little time, EF query compilation takes almost all of the time for DbContext and ObjectContext. LINQ2SQL takes twice as much time as EF because its mapping is slow (I'll explain why I think so below, in the "Complex TOP 10" test). BLToolkit takes slightly more time than ADO.NET. And I don't know why the precompiled ObjectContext query takes less time than ADO.NET :) (but remember, this time is without context initialization). DbContext doesn't support precompiled queries at all.

With context initialization:

Simple TOP 500 query

The simple TOP 500 query takes more time to request data from the database. This is the reason why DbContext and ObjectContext take only one and a half times as long as ADO.NET, and a third more than BLToolkit and the precompiled ObjectContext query.

With context initialization:

Complex TOP 10 query

The complex TOP 10 query shows a similar situation: EF query compilation time is comparable to the database request time. This is why DbContext and ObjectContext take only twice as much time as ADO.NET and BLToolkit.

As you remember, LINQ2SQL took more time than EF in the "Simple TOP 10" test, while in this test it takes less. We can suppose that the total query execution time, time(), consists of the following steps:

  1. Context initialization - ctx()
  2. Query compilation - comp()
  3. Request to database - db()
  4. Mapping result - map()

Below is a little math :), where "1" is the "Simple TOP 10" query, and "2" is this "Complex TOP 10" query.

time(L1) = ctx(L) + comp(L1) + db(1) + map(L1)
time(L2) = ctx(L) + comp(L2) + db(2) + map(L2)
time(EF1) = ctx(EF) + comp(EF1) + db(1) + map(EF1)
time(EF2) = ctx(EF) + comp(EF2) + db(2) + map(EF2)

ctx(L) = 0.09
ctx(EF) = 0.17
time(L1) = 0.87
time(EF1) = 0.55
time(L2) = 8.8
time(EF2) = 10.5

=>

0.87 = 0.09 + comp(L1) + db(1) + map(L1)
8.8 = 0.09 + comp(L2) + db(2) + map(L2)
0.55 = 0.17 + comp(EF1) + db(1) + map(EF1)
10.5 = 0.17 + comp(EF2) + db(2) + map(EF2)

=>

0.78 = comp(L1) + db(1) + map(L1)
8.71 = comp(L2) + db(2) + map(L2)
0.38 = comp(EF1) + db(1) + map(EF1)
10.33 = comp(EF2) + db(2) + map(EF2)

db(1) << db(2)
comp(L1) << comp(L2)
comp(EF1) << comp(EF2)
map(L1) ~= map(L2) = map(L)     // we can assume this because both queries have 10 rows
map(EF1) ~= map(EF2) = map(EF)  // we can assume this because both queries have 10 rows

=>

0.78 = comp(L1) + db(1) + map(L)        // (1)
8.71 = comp(L2) + db(2) + map(L)        // (2)
0.38 = comp(EF1) + db(1) + map(EF)      // (3)
10.33 = comp(EF2) + db(2) + map(EF)     // (4)

db(1) << db(2)
comp(L1) << comp(L2)
comp(EF1) << comp(EF2)

=> Let's subtract (3) from (1), and (4) from (2)

0.4 = comp(L1) - comp(EF1) + map(L) - map(EF)         // (1)
-1.62 = comp(L2) - comp(EF2) + map(L) - map(EF)       // (2)

comp(L1) << comp(L2)
comp(EF1) << comp(EF2)

=> Let's subtract (2) from (1)

2.02 = comp(L1) - comp(EF1) + map(L) - map(EF) - comp(L2) + comp(EF2) - map(L) + map(EF)

comp(L1) << comp(L2)
comp(EF1) << comp(EF2)

=>

2.02 = comp(L1) - comp(EF1) - comp(L2) + comp(EF2)

comp(L1) << comp(L2)
comp(EF1) << comp(EF2)

=>

comp(L2) - comp(L1) + 2.02 = comp(EF2) - comp(EF1)    // (1)

comp(L1) << comp(L2)                                   // (2)
comp(EF1) << comp(EF2)                                 // (3)

=> Using comparisons (2) and (3)

comp(L2) + 2.02 ~= comp(EF2)

So we can say that LINQ2SQL takes less time for compilation than EF, and consequently EF takes less time to map results, as I said above.

With context initialization:

Complex TOP 500 query

The complex TOP 500 query shows the same results as the complex TOP 10: the time to request data from the database is comparable to the compilation time, therefore DbContext and ObjectContext take only twice as much time as ADO.NET and BLToolkit.

With context initialization:

Conclusions

  • Context initialization for DbContext CodeFirst is slightly faster than for DbContext Designer (both generated from the database).
  • Context initialization for ObjectContext is twice as slow as for DbContext. But the absolute time is not significant: 0.4 ms versus 0.2 ms.
  • LINQ2SQL can be faster than EF for complex queries, and it can also be precompiled for some queries.
  • EF has much faster mapping than LINQ2SQL.
  • ObjectContext is a bit slower than DbContext, but some of its queries can be precompiled (parameters cannot be sequences).
  • BLToolkit doesn't provide compile-time type checking, but it's nearly as fast as ADO.NET and has great mapping capabilities (this article is in Russian (the main site is currently down), but you can understand a bit from the code samples).

Raw results (XSLT).

View project source code at Bitbucket.