// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2020 Google LLC
 * Author: Quentin Perret <qperret@google.com>
 */

#include <asm/kvm_hyp.h>
#include <nvhe/gfp.h>

u64 __hyp_vmemmap;

/*
 * Index the hyp_vmemmap to find a potential buddy page, but make no assumption
 * about its current state.
 *
 * Example buddy-tree for a 4-pages physically contiguous pool:
 *
 *                 o : Page 3
 *                /
 *               o-o : Page 2
 *              /
 *             /   o : Page 1
 *            /   /
 *           o---o-o : Page 0
 *    Order  2   1 0
 *
 * Example of requests on this pool:
 *   __find_buddy_nocheck(pool, page 0, order 0) => page 1
 *   __find_buddy_nocheck(pool, page 0, order 1) => page 2
 *   __find_buddy_nocheck(pool, page 1, order 0) => page 0
 *   __find_buddy_nocheck(pool, page 2, order 0) => page 3
 */
static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
					     struct hyp_page *p,
					     unsigned short order)
{
	phys_addr_t addr = hyp_page_to_phys(p);

	addr ^= (PAGE_SIZE << order);

	/*
	 * Don't return a page outside the pool range -- it belongs to
	 * something else and may not be mapped in hyp_vmemmap.
	 */
	if (addr < pool->range_start || addr >= pool->range_end)
		return NULL;

	return hyp_phys_to_page(addr);
}
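/*
 * Worked example (illustrative sketch only): the buddy lookup above is a
 * plain XOR of the block size into the physical address. Assuming 4KiB pages
 * and the 4-page pool pictured above starting at physical address 0x0:
 *
 *   page 0 @ 0x0000, order 0:  0x0000 ^ 0x1000 = 0x1000  => page 1
 *   page 0 @ 0x0000, order 1:  0x0000 ^ 0x2000 = 0x2000  => page 2
 *   page 2 @ 0x2000, order 0:  0x2000 ^ 0x1000 = 0x3000  => page 3
 *
 * An address falling outside [range_start, range_end) has no vmemmap entry
 * owned by this pool, hence the NULL return above.
 */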
/* Find a buddy page currently available for allocation */
static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
					   struct hyp_page *p,
					   unsigned short order)
{
	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);

	if (!buddy || buddy->order != order || buddy->refcount)
		return NULL;

	return buddy;
}

/*
 * Pages that are available for allocation are tracked in free-lists, so we use
 * the pages themselves to store the list nodes to avoid wasting space. As the
 * allocator always returns zeroed pages (which are zeroed on the hyp_put_page()
 * path to optimize allocation speed), we also need to clean-up the list node in
 * each page when we take it out of the list.
 */
static inline void page_remove_from_list(struct hyp_page *p)
{
	struct list_head *node = hyp_page_to_virt(p);

	__list_del_entry(node);
	memset(node, 0, sizeof(*node));
}

static inline void page_add_to_list(struct hyp_page *p, struct list_head *head)
{
	struct list_head *node = hyp_page_to_virt(p);

	INIT_LIST_HEAD(node);
	list_add_tail(node, head);
}

static inline struct hyp_page *node_to_page(struct list_head *node)
{
	return hyp_virt_to_page(node);
}

static void __hyp_attach_page(struct hyp_pool *pool,
			      struct hyp_page *p)
{
	phys_addr_t phys = hyp_page_to_phys(p);
	unsigned short order = p->order;
	struct hyp_page *buddy;

	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);

	/* Skip coalescing for 'external' pages being freed into the pool. */
	if (phys < pool->range_start || phys >= pool->range_end)
		goto insert;

	/*
	 * Only the first struct hyp_page of a high-order page (otherwise known
	 * as the 'head') should have p->order set. The non-head pages should
	 * have p->order = HYP_NO_ORDER. Here @p may no longer be the head
	 * after coalescing, so make sure to mark it HYP_NO_ORDER proactively.
	 */
	p->order = HYP_NO_ORDER;
	for (; (order + 1) <= pool->max_order; order++) {
		buddy = __find_buddy_avail(pool, p, order);
		if (!buddy)
			break;

		/* Take the buddy out of its list, and coalesce with @p */
		page_remove_from_list(buddy);
		buddy->order = HYP_NO_ORDER;
		p = min(p, buddy);
	}

insert:
	/* Mark the new head, and insert it */
	p->order = order;
	page_add_to_list(p, &pool->free_area[order]);
}

static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
					   struct hyp_page *p,
					   unsigned short order)
{
	struct hyp_page *buddy;

	page_remove_from_list(p);
	while (p->order > order) {
		/*
		 * The buddy of order n - 1 currently has HYP_NO_ORDER as it
		 * is covered by a higher-level page (whose head is @p). Use
		 * __find_buddy_nocheck() to find it and inject it in the
		 * free_list[n - 1], effectively splitting @p in half.
		 */
		p->order--;
		buddy = __find_buddy_nocheck(pool, p, p->order);
		buddy->order = p->order;
		page_add_to_list(buddy, &pool->free_area[buddy->order]);
	}

	return p;
}

static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
{
	if (hyp_page_ref_dec_and_test(p))
		__hyp_attach_page(pool, p);
}
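/*
 * Illustrative walk-through (a sketch, not a definitive trace), using the
 * 4-page pool from the diagram at the top of this file and assuming the pool
 * base is aligned to its size, with pages 0+1 sitting in free_area[1] as an
 * order-1 block and page 2 sitting in free_area[0]:
 *
 *   __hyp_put_page(pool, page 3) drops the last reference and calls
 *   __hyp_attach_page():
 *     order 0: buddy is page 2 (free, order 0) -> coalesce, p = page 2
 *     order 1: buddy is page 0 (free, order 1) -> coalesce, p = page 0
 *     page 0 is inserted into free_area[2] as an order-2 block
 *
 *   A later hyp_alloc_pages(pool, 0) finds only free_area[2] populated, and
 *   __hyp_extract_page() splits the block back down: page 2 is re-inserted
 *   at order 1, page 1 at order 0, and page 0 is returned to the caller.
 */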
/*
 * Changes to the buddy tree and page refcounts must be done with the hyp_pool
 * lock held. If a refcount change requires an update to the buddy tree (e.g.
 * hyp_put_page()), both operations must be done within the same critical
 * section to guarantee transient states (e.g. a page with null refcount but
 * not yet attached to a free list) can't be observed by well-behaved readers.
 */
void hyp_put_page(struct hyp_pool *pool, void *addr)
{
	struct hyp_page *p = hyp_virt_to_page(addr);

	hyp_spin_lock(&pool->lock);
	__hyp_put_page(pool, p);
	hyp_spin_unlock(&pool->lock);
}

void hyp_get_page(struct hyp_pool *pool, void *addr)
{
	struct hyp_page *p = hyp_virt_to_page(addr);

	hyp_spin_lock(&pool->lock);
	hyp_page_ref_inc(p);
	hyp_spin_unlock(&pool->lock);
}

void hyp_split_page(struct hyp_page *p)
{
	unsigned short order = p->order;
	unsigned int i;

	p->order = 0;
	for (i = 1; i < (1 << order); i++) {
		struct hyp_page *tail = p + i;

		tail->order = 0;
		hyp_set_page_refcounted(tail);
	}
}

void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
{
	unsigned short i = order;
	struct hyp_page *p;

	hyp_spin_lock(&pool->lock);

	/* Look for a high-enough-order page */
	while (i <= pool->max_order && list_empty(&pool->free_area[i]))
		i++;
	if (i > pool->max_order) {
		hyp_spin_unlock(&pool->lock);
		return NULL;
	}

	/* Extract it from the tree at the right order */
	p = node_to_page(pool->free_area[i].next);
	p = __hyp_extract_page(pool, p, order);

	hyp_set_page_refcounted(p);
	hyp_spin_unlock(&pool->lock);

	return hyp_page_to_virt(p);
}

int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
		  unsigned int reserved_pages)
{
	phys_addr_t phys = hyp_pfn_to_phys(pfn);
	struct hyp_page *p;
	int i;

	hyp_spin_lock_init(&pool->lock);
	pool->max_order = min(MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
	for (i = 0; i <= pool->max_order; i++)
		INIT_LIST_HEAD(&pool->free_area[i]);
	pool->range_start = phys;
	pool->range_end = phys + (nr_pages << PAGE_SHIFT);

	/* Init the vmemmap portion */
	p = hyp_phys_to_page(phys);
	for (i = 0; i < nr_pages; i++)
		hyp_set_page_refcounted(&p[i]);

	/* Attach the unused pages to the buddy tree */
	for (i = reserved_pages; i < nr_pages; i++)
		__hyp_put_page(pool, &p[i]);

	return 0;
}
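/*
 * Minimal usage sketch, kept under #if 0 so it is never built: it only
 * illustrates how a caller might drive the pool API above. The names
 * my_pool, example_pool_usage(), pfn and nr_pages are hypothetical, and the
 * reserved_pages count and error code are assumptions, not part of this file.
 */
#if 0
static struct hyp_pool my_pool;

static int example_pool_usage(u64 pfn, unsigned int nr_pages)
{
	void *va;
	int ret;

	/* Hand the whole donation to the free lists (no reserved pages). */
	ret = hyp_pool_init(&my_pool, pfn, nr_pages, 0);
	if (ret)
		return ret;

	/* Grab two contiguous, zeroed pages (order 1). */
	va = hyp_alloc_pages(&my_pool, 1);
	if (!va)
		return -ENOMEM;

	/*
	 * The block comes back with a refcount of 1. Extra references pin it;
	 * the final put drops the refcount to zero and re-attaches the block
	 * to the buddy tree.
	 */
	hyp_get_page(&my_pool, va);
	hyp_put_page(&my_pool, va);
	hyp_put_page(&my_pool, va);

	return 0;
}
#endif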