
Public release

404found
Hooked
Posts: 5
Registered: ‎29-05-2025

Re: Public release

At the risk of just being another 'me too': is there any room on the trial, or an update on when it's getting rolled out to other high-touch users?
Leanne_T
Plusnet Help Team
Posts: 117
Thanks: 82
Fixes: 2
Registered: ‎10-12-2024

Re: Public release

Morning @404found 

Thanks for posting and showing your interest.

If we have any availability for the trial in the future, we will be in touch.

Leanne.

 

mnotgninnep
Dabbler
Posts: 11
Thanks: 3
Registered: ‎09-08-2016

Re: Public release

Hi @Leanne_T

I’ve asked a couple of times but had no confirmation I’ve been added to the waiting list. Please add me.

Thank you.

Michael
Leanne_T
Plusnet Help Team
Posts: 117
Thanks: 82
Fixes: 2
Registered: ‎10-12-2024

Re: Public release

Hi @mnotgninnep 

I've shared the thread with the relevant team and if they need any extra people, we will be in touch. 

Thanks very much. 

Leanne.

mnotgninnep
Dabbler
Posts: 11
Thanks: 3
Registered: ‎09-08-2016

Re: Public release

Thank you. I appreciate it.
fjama1
Dabbler
Posts: 11
Thanks: 1
Registered: ‎22-02-2025

Re: Public release

 

Hi @dave,

I’m prepping my pfSense Dual-WAN environment (YouFibre IPoE + Plusnet PPPoE) and need architectural clarity on how Plusnet BNGs handle delegated prefixes in multi-WAN failover scenarios:

  1. Prefix Persistence: Does the delegated /56 remain bound to the session if the PPPoE connection is established but idle (not the default gateway)?

  2. LCP Echo & Stability: Is there a timeout for idle IPv6 sessions, or will LCP echoes on the PPPoE layer keep the PD alive indefinitely?

  3. MTU Mismatch on Failover: Given the 1500 (IPoE) vs 1492 (PPPoE) mismatch, does the BNG reliably generate ICMPv6 "Packet Too Big" messages to force Path MTU Discovery during failover, or is manual MSS clamping recommended?

Keen to ensure my network is "Plusnet-ready" for the IPv6 rollout. Any technical insight is much appreciated.

Thanks,

 

MPC
Rising Star
Posts: 78
Thanks: 29
Registered: ‎14-02-2019

Re: Public release

Can't speak to most of that, but baby jumbo frames are supported by the Plusnet setup, so you can configure a 1500 MTU if you set that up. I don't know that it did anything significant to throughput for me, but it felt better having the local LAN MTU match the uplink.

In practical terms, when I was running a couple of HE tunnels (one on Plusnet, one over Three 5G), I found clamping more reliable than expecting ICMPv6 Packet Too Big messages to be returned correctly. (And I'd configured both to have the same MTU.)
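On a Linux router the clamp itself is a one-liner (a sketch, not Plusnet-specific — it assumes ip6tables and a ppp0 uplink; pfSense exposes the same idea as an MSS field on the interface settings):

```shell
# Clamp outgoing TCP MSS to the path MTU on the PPPoE uplink.
# With a 1492-byte PPPoE MTU the IPv6 MSS works out to
# 1492 - 40 (IPv6 header) - 20 (TCP header) = 1432.
ip6tables -t mangle -A FORWARD -o ppp0 -p tcp \
    --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# Or pin it explicitly instead of deriving it from the route MTU:
# ip6tables -t mangle -A FORWARD -o ppp0 -p tcp \
#     --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1432
```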

 

Not sure how the pfSense setup would enable baby jumbo, but on Debian it's a pre-up command on the underlying Ethernet port, then declaring 1500 for mtu and mru in the dsl-provider file. (I'm still old-school /etc/network/interfaces based, but I imagine netplan has a similar capability.)

 

##
# The Plusnet PPPoE connection itself, which uses the enp7s0 physical interface
#
auto plusnet-provider
iface plusnet-provider inet ppp
    provider plusnet-provider
    # Raise the Ethernet MTU to 1508 so the PPPoE session can carry 1500 (baby jumbo)
    pre-up /usr/sbin/ip link set enp7s0 mtu 1508 up || true
    pre-up /usr/sbin/sysctl -w net.ipv6.conf.ppp0.accept_ra=2 || true
    pre-up /usr/sbin/sysctl -w net.ipv6.conf.ppp0.forwarding=1 || true
    post-up /root/bin/on_reboot_policy_routing_setup || true
    # Re-apply after ppp0 comes up; the pre-up versions fail if ppp0 doesn't exist yet
    post-up /usr/sbin/sysctl -w net.ipv6.conf.ppp0.accept_ra=2 || true
    post-up /usr/sbin/sysctl -w net.ipv6.conf.ppp0.forwarding=1 || true
    # post-up /usr/sbin/tc qdisc del dev ppp0 root || true
    # post-up /usr/sbin/tc qdisc add dev ppp0 root sfq perturb 0 depth 127 limit 1023 headdrop || true
    post-down /usr/sbin/ip link set enp7s0 down || true
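The matching pppd peers file looks something like this (a sketch only — the filename follows the `provider plusnet-provider` line above, the username is a placeholder on Plusnet's usual @plusdsl.net realm, and the mtu/mru lines are the bit that actually requests 1500 on the session):

```
# /etc/ppp/peers/plusnet-provider (sketch -- substitute your own username)
plugin rp-pppoe.so enp7s0
user "myusername@plusdsl.net"
mtu 1500
mru 1500
noipdefault
defaultroute
persist
maxfail 0
+ipv6
```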

 

Cheers,

Mark