
Commit 8763cb4

Jérôme Glisse authored and torvalds committed
mm/migrate: new memory migration helper for use with device memory
This patch adds a new memory migration helper, which migrates memory backing a range of virtual addresses of a process to different memory (which can be allocated through a special allocator). It differs from NUMA migration by working on a range of virtual addresses, and thus by doing migration in chunks that can be large enough to use a DMA engine or a special copy-offloading engine.

Expected users are anyone with heterogeneous memory, where different memories have different characteristics (latency, bandwidth, ...). As an example, IBM platforms with a CAPI bus can make use of this feature to migrate between regular memory and CAPI device memory. New CPU architectures with a pool of high-performance memory that is not managed as a cache but presented as regular memory (while being faster and lower latency than DDR) will also be prime users of this patch.

Migration to private device memory will be useful for devices that have a large pool of such memory, like GPUs; NVidia plans to use HMM for that.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Jérôme Glisse <[email protected]>
Signed-off-by: Evgeny Baskakov <[email protected]>
Signed-off-by: John Hubbard <[email protected]>
Signed-off-by: Mark Hairgrove <[email protected]>
Signed-off-by: Sherry Cheung <[email protected]>
Signed-off-by: Subhash Gutti <[email protected]>
Cc: Aneesh Kumar <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: David Nellans <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Ross Zwisler <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Cc: Bob Liu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
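The helper operates on one chunk of a virtual address range at a time, with the caller supplying the source and destination pfn arrays. As a minimal sketch of the intended call pattern (the wrapper function, the chunk size and my_migrate_ops are illustrative assumptions, not part of this patch; the callbacks behind my_migrate_ops are sketched after the diff below):

/*
 * Hypothetical driver-side wrapper, for illustration only. The caller
 * is assumed to hold mmap_sem in read mode and to pass page-aligned
 * start/end addresses, as migrate_vma() expects.
 */
#define MY_CHUNK_NPAGES	16	/* arbitrary; big enough to batch a DMA copy */

static int my_migrate_range(struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
			    void *private)
{
	unsigned long src[MY_CHUNK_NPAGES], dst[MY_CHUNK_NPAGES];
	unsigned long addr, next;
	int ret;

	/* Walk the range one chunk at a time. */
	for (addr = start; addr < end; addr = next) {
		next = min(end, addr + (MY_CHUNK_NPAGES << PAGE_SHIFT));
		ret = migrate_vma(&my_migrate_ops, vma, addr, next,
				  src, dst, private);
		if (ret)
			return ret;
	}
	return 0;
}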
1 parent 2916ecc commit 8763cb4

File tree

2 files changed: +596 −0 lines changed

include/linux/migrate.h

Lines changed: 104 additions & 0 deletions
@@ -156,4 +156,108 @@ static inline int migrate_misplaced_transhuge_page(struct mm_struct *mm,
 }
 #endif /* CONFIG_NUMA_BALANCING && CONFIG_TRANSPARENT_HUGEPAGE*/
 
+
+#ifdef CONFIG_MIGRATION
+
+#define MIGRATE_PFN_VALID	(1UL << 0)
+#define MIGRATE_PFN_MIGRATE	(1UL << 1)
+#define MIGRATE_PFN_LOCKED	(1UL << 2)
+#define MIGRATE_PFN_WRITE	(1UL << 3)
+#define MIGRATE_PFN_ERROR	(1UL << 4)
+#define MIGRATE_PFN_SHIFT	5
+
+static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
+{
+	if (!(mpfn & MIGRATE_PFN_VALID))
+		return NULL;
+	return pfn_to_page(mpfn >> MIGRATE_PFN_SHIFT);
+}
+
+static inline unsigned long migrate_pfn(unsigned long pfn)
+{
+	return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
+}
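The two inline helpers above define the encoding used throughout the API: the page frame number lives in the bits above MIGRATE_PFN_SHIFT, and the low five bits carry the state flags. A minimal round-trip, using only the macros from this patch (variable names are illustrative):

/* Encode: shift the pfn up and tag the entry as valid and migratable. */
unsigned long mpfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;

/* Decode: get the struct page back, or NULL if MIGRATE_PFN_VALID is clear. */
struct page *p = migrate_pfn_to_page(mpfn);	/* p == page */
bool migratable = mpfn & MIGRATE_PFN_MIGRATE;	/* true */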
+
+/*
+ * struct migrate_vma_ops - migrate operation callback
+ *
+ * @alloc_and_copy: alloc destination memory and copy source memory to it
+ * @finalize_and_map: allow caller to map the successfully migrated pages
+ *
+ *
+ * The alloc_and_copy() callback happens once all source pages have been locked,
+ * unmapped and checked (checked whether pinned or not). All pages that can be
+ * migrated will have an entry in the src array set with the pfn value of the
+ * page and with the MIGRATE_PFN_VALID and MIGRATE_PFN_MIGRATE flag set (other
+ * flags might be set but should be ignored by the callback).
+ *
+ * The alloc_and_copy() callback can then allocate destination memory and copy
+ * source memory to it for all those entries (ie with MIGRATE_PFN_VALID and
+ * MIGRATE_PFN_MIGRATE flag set). Once these are allocated and copied, the
+ * callback must update each corresponding entry in the dst array with the pfn
+ * value of the destination page and with the MIGRATE_PFN_VALID and
+ * MIGRATE_PFN_LOCKED flags set (destination pages must have their struct pages
+ * locked, via lock_page()).
+ *
+ * At this point the alloc_and_copy() callback is done and returns.
+ *
+ * Note that the callback does not have to migrate all the pages that are
+ * marked with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration
+ * from device memory to system memory (ie the MIGRATE_PFN_DEVICE flag is also
+ * set in the src array entry). If the device driver cannot migrate a device
+ * page back to system memory, then it must set the corresponding dst array
+ * entry to MIGRATE_PFN_ERROR. This will trigger a SIGBUS if CPU tries to
+ * access any of the virtual addresses originally backed by this page. Because
+ * a SIGBUS is such a severe result for the userspace process, the device
+ * driver should avoid setting MIGRATE_PFN_ERROR unless it is really in an
+ * unrecoverable state.
+ *
+ * THE alloc_and_copy() CALLBACK MUST NOT CHANGE ANY OF THE SRC ARRAY ENTRIES
+ * OR BAD THINGS WILL HAPPEN !
+ *
+ *
+ * The finalize_and_map() callback happens after struct page migration from
+ * source to destination (destination struct pages are the struct pages for the
+ * memory allocated by the alloc_and_copy() callback). Migration can fail, and
+ * thus the finalize_and_map() allows the driver to inspect which pages were
+ * successfully migrated, and which were not. Successfully migrated pages will
+ * have the MIGRATE_PFN_MIGRATE flag set for their src array entry.
+ *
+ * It is safe to update device page table from within the finalize_and_map()
+ * callback because both destination and source page are still locked, and the
+ * mmap_sem is held in read mode (hence no one can unmap the range being
+ * migrated).
+ *
+ * Once callback is done cleaning up things and updating its page table (if it
+ * chose to do so, this is not an obligation) then it returns. At this point,
+ * the HMM core will finish up the final steps, and the migration is complete.
+ *
+ * THE finalize_and_map() CALLBACK MUST NOT CHANGE ANY OF THE SRC OR DST ARRAY
+ * ENTRIES OR BAD THINGS WILL HAPPEN !
+ */
+struct migrate_vma_ops {
+	void (*alloc_and_copy)(struct vm_area_struct *vma,
+			       const unsigned long *src,
+			       unsigned long *dst,
+			       unsigned long start,
+			       unsigned long end,
+			       void *private);
+	void (*finalize_and_map)(struct vm_area_struct *vma,
+				 const unsigned long *src,
+				 const unsigned long *dst,
+				 unsigned long start,
+				 unsigned long end,
+				 void *private);
+};
+
+int migrate_vma(const struct migrate_vma_ops *ops,
+		struct vm_area_struct *vma,
+		unsigned long start,
+		unsigned long end,
+		unsigned long *src,
+		unsigned long *dst,
+		void *private);
+
+#endif /* CONFIG_MIGRATION */
+
 #endif /* _LINUX_MIGRATE_H */
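To ground the documented protocol, here is a minimal sketch of a migrate_vma_ops implementation for an ordinary system-memory copy. All names are hypothetical and everything device-specific is elided; a real device driver would allocate device memory and hand the copy to a DMA engine instead of alloc_page()/copy_highpage():

/* Illustrative callbacks, not part of this patch. */
static void my_alloc_and_copy(struct vm_area_struct *vma,
			      const unsigned long *src,
			      unsigned long *dst,
			      unsigned long start,
			      unsigned long end,
			      void *private)
{
	unsigned long addr;
	unsigned long i;

	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
		struct page *spage = migrate_pfn_to_page(src[i]);
		struct page *dpage;

		/* Only entries flagged MIGRATE_PFN_MIGRATE may be migrated. */
		if (!spage || !(src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		dpage = alloc_page(GFP_HIGHUSER_MOVABLE);
		if (!dpage)
			continue;	/* skipping a page is allowed */

		copy_highpage(dpage, spage);

		/*
		 * Destination pages must be locked, and the dst entry must
		 * carry MIGRATE_PFN_VALID (set by migrate_pfn()) plus
		 * MIGRATE_PFN_LOCKED. The src array is never written.
		 */
		lock_page(dpage);
		dst[i] = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
	}
}

static void my_finalize_and_map(struct vm_area_struct *vma,
				const unsigned long *src,
				const unsigned long *dst,
				unsigned long start,
				unsigned long end,
				void *private)
{
	unsigned long addr;
	unsigned long i;

	/*
	 * Entries that still have MIGRATE_PFN_MIGRATE set were migrated;
	 * a device driver would update its own page tables here, while
	 * source and destination pages are still locked.
	 */
	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++)
		if (!(src[i] & MIGRATE_PFN_MIGRATE))
			pr_debug("page at %#lx was not migrated\n", addr);
}

static const struct migrate_vma_ops my_migrate_ops = {
	.alloc_and_copy		= my_alloc_and_copy,
	.finalize_and_map	= my_finalize_and_map,
};

A driver would then pass &my_migrate_ops to migrate_vma(), as in the call-pattern sketch after the commit message above.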
