Whereas one approach is to implement the ICloneable interface (described here, so I won't regurgitate), here's a nice deep-clone object copier I found on The Code Project a while ago and incorporated into our code.
As mentioned elsewhere, it requires your objects to be serializable.
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
/// <summary>
/// Reference Article http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary Serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
    /// <summary>
    /// Perform a deep copy of the object via serialization.
    /// </summary>
    /// <typeparam name="T">The type of object being copied.</typeparam>
    /// <param name="source">The object instance to copy.</param>
    /// <returns>A deep copy of the object.</returns>
    public static T Clone<T>(T source)
    {
        if (!typeof(T).IsSerializable)
        {
            throw new ArgumentException("The type must be serializable.", nameof(source));
        }

        // Don't serialize a null object; simply return the default for that type
        if (ReferenceEquals(source, null)) return default;

        using var stream = new MemoryStream();
        IFormatter formatter = new BinaryFormatter();
        formatter.Serialize(stream, source);
        stream.Seek(0, SeekOrigin.Begin);
        return (T)formatter.Deserialize(stream);
    }
}
The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to concern yourself with cloning every member when an object gets complex.
If you prefer to use the extension methods of C# 3.0, change the method to have the following signature:
public static T Clone<T>(this T source)
{
    // ...
}
Now the method call simply becomes objectBeingCloned.Clone();
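For example, given a [Serializable] type (the Person class below is purely illustrative, and the list needs using System.Collections.Generic;), the clone is fully independent of the original:

[Serializable]
public class Person
{
    public string Name { get; set; }
    public List<string> Nicknames { get; set; } = new List<string>();
}

// ...

var original = new Person { Name = "Ada" };
original.Nicknames.Add("The Countess");

var copy = original.Clone();                 // extension-method form
copy.Nicknames.Add("First Programmer");

Console.WriteLine(original.Nicknames.Count); // 1 -- the clone got its own list
Console.WriteLine(copy.Nicknames.Count);     // 2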
EDIT (January 10 2015): Thought I'd revisit this to mention that I recently started using (Newtonsoft) Json to do this; it should be lighter, and it avoids the overhead of [Serializable] attributes. (NB: @atconway has pointed out in the comments that private members are not cloned using the JSON method.) This version requires the Newtonsoft.Json package and a using Newtonsoft.Json; directive.
/// <summary>
/// Perform a deep copy of the object, using Json as a serialization method.
/// NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
    // Don't serialize a null object; simply return the default for that type
    if (ReferenceEquals(source, null)) return default;

    // Initialize inner objects individually.
    // For example, if a list property is seeded with default values in the
    // constructor, but those items were removed from 'source', then without
    // ObjectCreationHandling.Replace the constructor's defaults would be
    // re-added to the result.
    var deserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };
    return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
}
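To see why ObjectCreationHandling.Replace matters, consider a hypothetical type whose property initializer seeds a default value. Without the setting, Json.NET appends to the already-populated list instead of replacing it:

public class Settings
{
    // The initializer seeds a default entry before deserialization runs
    public List<string> Plugins { get; set; } = new List<string> { "core" };
}

var source = new Settings();
source.Plugins.Clear();                  // caller removed the default

var copy = source.CloneJson();
Console.WriteLine(copy.Plugins.Count);   // 0 with ObjectCreationHandling.Replace;
                                         // 1 ("core" re-added) without it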
Catch System.Exception and switch on the types:
catch (Exception ex)
{
    if (ex is FormatException || ex is OverflowException)
    {
        WebId = Guid.Empty;
        return;
    }

    throw;
}
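If you're on C# 6 or later, an exception filter gives the same behaviour without catching and re-throwing; this sketch assumes the same WebId field and a hypothetical queryString lookup:

try
{
    WebId = new Guid(queryString["web"]);
}
catch (Exception ex) when (ex is FormatException || ex is OverflowException)
{
    WebId = Guid.Empty;
}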
Best Answer
Seriously belated edit: If you're using .NET 4.0 or later
The File class has a new ReadLines method which lazily enumerates lines rather than greedily reading them all into an array like ReadAllLines. So now you can have both efficiency and conciseness with:
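A minimal sketch, assuming the goal is a line count and an illustrative file path (Count() needs using System.Linq;):

var lineCount = File.ReadLines(@"C:\file.txt").Count();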
Original Answer
If you're not too bothered about efficiency, you can simply write:
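A sketch of the simple approach, with an illustrative file path:

var lineCount = File.ReadAllLines(@"C:\file.txt").Length;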
For a more efficient method you could do:
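Something along these lines, again with an illustrative path:

var lineCount = 0;
using (var reader = new StreamReader(@"C:\file.txt"))
{
    while (reader.ReadLine() != null)
    {
        lineCount++;
    }
}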
Edit: In response to questions about efficiency
The reason I said the second was more efficient was regarding memory usage, not necessarily speed. The first one loads the entire contents of the file into an array, which means it must allocate at least as much memory as the size of the file. The second merely loops over one line at a time, so it never has to allocate more than one line's worth of memory at a time. This isn't that important for small files, but for larger files it can be an issue (if you try to find the number of lines in a 4GB file on a 32-bit system, for example, there simply isn't enough user-mode address space to allocate an array that large).
In terms of speed I wouldn't expect there to be a lot in it. It's possible that ReadAllLines has some internal optimisations, but on the other hand it may have to allocate a massive chunk of memory. I'd guess that ReadAllLines might be faster for small files, but significantly slower for large files; though the only way to tell would be to measure it with a Stopwatch or code profiler.
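A rough Stopwatch harness, assuming a suitably large test file at an illustrative path (Count() again needs using System.Linq;):

var sw = System.Diagnostics.Stopwatch.StartNew();
var eager = File.ReadAllLines(@"C:\big.txt").Length;
sw.Stop();
Console.WriteLine($"ReadAllLines: {eager} lines in {sw.ElapsedMilliseconds} ms");

sw.Restart();
var lazy = File.ReadLines(@"C:\big.txt").Count();
sw.Stop();
Console.WriteLine($"ReadLines:    {lazy} lines in {sw.ElapsedMilliseconds} ms");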